2026-04-16 04:31:13.824966 | Job console starting
2026-04-16 04:31:13.852561 | Updating git repos
2026-04-16 04:31:14.461825 | Cloning repos into workspace
2026-04-16 04:31:14.710005 | Restoring repo states
2026-04-16 04:31:14.729233 | Merging changes
2026-04-16 04:31:14.729254 | Checking out repos
2026-04-16 04:31:15.087188 | Preparing playbooks
2026-04-16 04:31:15.821050 | Running Ansible setup
2026-04-16 04:31:20.743787 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-04-16 04:31:21.548205 |
2026-04-16 04:31:21.548379 | PLAY [Base pre]
2026-04-16 04:31:21.566305 |
2026-04-16 04:31:21.566466 | TASK [Setup log path fact]
2026-04-16 04:31:21.597346 | orchestrator | ok
2026-04-16 04:31:21.615360 |
2026-04-16 04:31:21.615545 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-04-16 04:31:21.657855 | orchestrator | ok
2026-04-16 04:31:21.671311 |
2026-04-16 04:31:21.671428 | TASK [emit-job-header : Print job information]
2026-04-16 04:31:21.717605 | # Job Information
2026-04-16 04:31:21.717856 | Ansible Version: 2.16.14
2026-04-16 04:31:21.717925 | Job: testbed-upgrade-stable-ubuntu-24.04
2026-04-16 04:31:21.717976 | Pipeline: periodic-midnight
2026-04-16 04:31:21.718011 | Executor: 521e9411259a
2026-04-16 04:31:21.718044 | Triggered by: https://github.com/osism/testbed
2026-04-16 04:31:21.718091 | Event ID: 7c8dea1006d24fcaafbfc066f90da822
2026-04-16 04:31:21.727079 |
2026-04-16 04:31:21.727221 | LOOP [emit-job-header : Print node information]
2026-04-16 04:31:21.853602 | orchestrator | ok:
2026-04-16 04:31:21.853998 | orchestrator | # Node Information
2026-04-16 04:31:21.854073 | orchestrator | Inventory Hostname: orchestrator
2026-04-16 04:31:21.854125 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-04-16 04:31:21.854170 | orchestrator | Username: zuul-testbed02
2026-04-16 04:31:21.854214 | orchestrator | Distro: Debian 12.13
2026-04-16 04:31:21.854265 | orchestrator | Provider: static-testbed
2026-04-16 04:31:21.854311 | orchestrator | Region:
2026-04-16 04:31:21.854356 | orchestrator | Label: testbed-orchestrator
2026-04-16 04:31:21.854398 | orchestrator | Product Name: OpenStack Nova
2026-04-16 04:31:21.854439 | orchestrator | Interface IP: 81.163.193.140
2026-04-16 04:31:21.879593 |
2026-04-16 04:31:21.879773 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-04-16 04:31:22.376570 | orchestrator -> localhost | changed
2026-04-16 04:31:22.387000 |
2026-04-16 04:31:22.387145 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-04-16 04:31:23.566632 | orchestrator -> localhost | changed
2026-04-16 04:31:23.585125 |
2026-04-16 04:31:23.585328 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-04-16 04:31:23.887570 | orchestrator -> localhost | ok
2026-04-16 04:31:23.903222 |
2026-04-16 04:31:23.903413 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-04-16 04:31:23.937039 | orchestrator | ok
2026-04-16 04:31:23.958272 | orchestrator | included: /var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-04-16 04:31:23.968224 |
2026-04-16 04:31:23.968358 | TASK [add-build-sshkey : Create Temp SSH key]
2026-04-16 04:31:25.649394 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-04-16 04:31:25.649646 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/219a5aa2066345788719ba53c87e0c69_id_rsa
2026-04-16 04:31:25.649691 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/219a5aa2066345788719ba53c87e0c69_id_rsa.pub
2026-04-16 04:31:25.649718 | orchestrator -> localhost | The key fingerprint is:
2026-04-16 04:31:25.649742 | orchestrator -> localhost | SHA256:qDvS6LT2ewDsHqZUtbm3dDtdRwLX0gdpNoNBcE1dEZc zuul-build-sshkey
2026-04-16 04:31:25.649766 | orchestrator -> localhost | The key's randomart image is:
2026-04-16 04:31:25.649805 | orchestrator -> localhost | +---[RSA 3072]----+
2026-04-16 04:31:25.649829 | orchestrator -> localhost | | .o+==*B|
2026-04-16 04:31:25.649851 | orchestrator -> localhost | | . o.oBE+|
2026-04-16 04:31:25.649872 | orchestrator -> localhost | | . . o oo.o.|
2026-04-16 04:31:25.649908 | orchestrator -> localhost | | o. o . . . |
2026-04-16 04:31:25.649930 | orchestrator -> localhost | | ... o S o |
2026-04-16 04:31:25.649956 | orchestrator -> localhost | | .+ .o o . . . |
2026-04-16 04:31:25.649976 | orchestrator -> localhost | |.+.+..o o o . . |
2026-04-16 04:31:25.649996 | orchestrator -> localhost | |..=.o... o . |
2026-04-16 04:31:25.650016 | orchestrator -> localhost | | ooo++ . |
2026-04-16 04:31:25.650037 | orchestrator -> localhost | +----[SHA256]-----+
2026-04-16 04:31:25.650093 | orchestrator -> localhost | ok: Runtime: 0:00:01.161968
2026-04-16 04:31:25.658535 |
2026-04-16 04:31:25.658651 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-04-16 04:31:25.689354 | orchestrator | ok
2026-04-16 04:31:25.700404 | orchestrator | included: /var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-04-16 04:31:25.710164 |
2026-04-16 04:31:25.710273 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-04-16 04:31:25.734557 | orchestrator | skipping: Conditional result was False
2026-04-16 04:31:25.742814 |
2026-04-16 04:31:25.742971 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-04-16 04:31:26.370981 | orchestrator | changed
2026-04-16 04:31:26.380414 |
2026-04-16 04:31:26.380577 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-04-16 04:31:26.734701 | orchestrator | ok
2026-04-16 04:31:26.745062 |
2026-04-16 04:31:26.745225 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-04-16 04:31:27.171172 | orchestrator | ok
2026-04-16 04:31:27.180664 |
2026-04-16 04:31:27.180815 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-04-16 04:31:27.637196 | orchestrator | ok
2026-04-16 04:31:27.644853 |
2026-04-16 04:31:27.645015 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-04-16 04:31:27.669286 | orchestrator | skipping: Conditional result was False
2026-04-16 04:31:27.679258 |
2026-04-16 04:31:27.679397 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-04-16 04:31:28.140718 | orchestrator -> localhost | changed
2026-04-16 04:31:28.155836 |
2026-04-16 04:31:28.156024 | TASK [add-build-sshkey : Add back temp key]
2026-04-16 04:31:28.517989 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/219a5aa2066345788719ba53c87e0c69_id_rsa (zuul-build-sshkey)
2026-04-16 04:31:28.518401 | orchestrator -> localhost | ok: Runtime: 0:00:00.015955
2026-04-16 04:31:28.530021 |
2026-04-16 04:31:28.530160 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-04-16 04:31:28.974945 | orchestrator | ok
2026-04-16 04:31:28.984300 |
2026-04-16 04:31:28.984448 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-04-16 04:31:29.010673 | orchestrator | skipping: Conditional result was False
2026-04-16 04:31:29.067773 |
2026-04-16 04:31:29.067946 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-04-16 04:31:29.500194 | orchestrator | ok
2026-04-16 04:31:29.514560 |
2026-04-16 04:31:29.514690 | TASK [validate-host : Define zuul_info_dir fact]
2026-04-16 04:31:29.563464 | orchestrator | ok
2026-04-16 04:31:29.575489 |
2026-04-16 04:31:29.575610 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-04-16 04:31:29.907162 | orchestrator -> localhost | ok
2026-04-16 04:31:29.922996 |
2026-04-16 04:31:29.923173 | TASK [validate-host : Collect information about the host]
2026-04-16 04:31:31.268165 | orchestrator | ok
2026-04-16 04:31:31.287065 |
2026-04-16 04:31:31.287240 | TASK [validate-host : Sanitize hostname]
2026-04-16 04:31:31.360257 | orchestrator | ok
2026-04-16 04:31:31.370517 |
2026-04-16 04:31:31.370693 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-04-16 04:31:31.937929 | orchestrator -> localhost | changed
2026-04-16 04:31:31.950652 |
2026-04-16 04:31:31.950782 | TASK [validate-host : Collect information about zuul worker]
2026-04-16 04:31:32.391326 | orchestrator | ok
2026-04-16 04:31:32.399729 |
2026-04-16 04:31:32.399919 | TASK [validate-host : Write out all zuul information for each host]
2026-04-16 04:31:32.994286 | orchestrator -> localhost | changed
2026-04-16 04:31:33.007254 |
2026-04-16 04:31:33.007362 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-04-16 04:31:33.318125 | orchestrator | ok
2026-04-16 04:31:33.327425 |
2026-04-16 04:31:33.327549 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-04-16 04:31:54.528773 | orchestrator | changed:
2026-04-16 04:31:54.529049 | orchestrator | .d..t...... src/
2026-04-16 04:31:54.529126 | orchestrator | .d..t...... src/github.com/
2026-04-16 04:31:54.529166 | orchestrator | .d..t...... src/github.com/osism/
2026-04-16 04:31:54.529201 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-04-16 04:31:54.529236 | orchestrator | RedHat.yml
2026-04-16 04:31:54.546622 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-04-16 04:31:54.546639 | orchestrator | RedHat.yml
2026-04-16 04:31:54.546695 | orchestrator | = 2.2.0"...
2026-04-16 04:32:07.305970 | orchestrator | - Finding latest version of hashicorp/null...
2026-04-16 04:32:07.322071 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-04-16 04:32:07.807297 | orchestrator | - Installing hashicorp/local v2.8.0...
2026-04-16 04:32:08.561075 | orchestrator | - Installed hashicorp/local v2.8.0 (signed, key ID 0C0AF313E5FD9F80)
2026-04-16 04:32:08.931532 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-04-16 04:32:09.577244 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-04-16 04:32:09.967283 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-04-16 04:32:10.951927 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-04-16 04:32:10.952151 | orchestrator |
2026-04-16 04:32:10.952176 | orchestrator | Providers are signed by their developers.
2026-04-16 04:32:10.952190 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-04-16 04:32:10.952205 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-04-16 04:32:10.952230 | orchestrator |
2026-04-16 04:32:10.952250 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-04-16 04:32:10.952287 | orchestrator | selections it made above. Include this file in your version control repository
2026-04-16 04:32:10.952308 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-04-16 04:32:10.952333 | orchestrator | you run "tofu init" in the future.
2026-04-16 04:32:10.952360 | orchestrator |
2026-04-16 04:32:10.952372 | orchestrator | OpenTofu has been successfully initialized!
2026-04-16 04:32:10.952383 | orchestrator |
2026-04-16 04:32:10.952394 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-04-16 04:32:10.952405 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-04-16 04:32:10.952416 | orchestrator | should now work.
2026-04-16 04:32:10.952431 | orchestrator |
2026-04-16 04:32:10.952449 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-04-16 04:32:10.952467 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-04-16 04:32:10.952485 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-04-16 04:32:11.122397 | orchestrator | Created and switched to workspace "ci"!
2026-04-16 04:32:11.122449 | orchestrator |
2026-04-16 04:32:11.122456 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-04-16 04:32:11.122461 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-04-16 04:32:11.122485 | orchestrator | for this configuration.
2026-04-16 04:32:11.272204 | orchestrator | ci.auto.tfvars
2026-04-16 04:32:11.276374 | orchestrator | default_custom.tf
2026-04-16 04:32:12.293459 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-04-16 04:32:12.824843 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-04-16 04:32:13.080227 | orchestrator |
2026-04-16 04:32:13.080312 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-04-16 04:32:13.080323 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-04-16 04:32:13.080360 | orchestrator | + create
2026-04-16 04:32:13.080384 | orchestrator | <= read (data resources)
2026-04-16 04:32:13.080404 | orchestrator |
2026-04-16 04:32:13.080412 | orchestrator | OpenTofu will perform the following actions:
2026-04-16 04:32:13.080573 | orchestrator |
2026-04-16 04:32:13.080596 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-04-16 04:32:13.080604 | orchestrator | # (config refers to values not yet known)
2026-04-16 04:32:13.080611 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-04-16 04:32:13.080618 | orchestrator | + checksum = (known after apply)
2026-04-16 04:32:13.080624 | orchestrator | + created_at = (known after apply)
2026-04-16 04:32:13.080631 | orchestrator | + file = (known after apply)
2026-04-16 04:32:13.080637 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.080688 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.080696 | orchestrator | + min_disk_gb = (known after apply)
2026-04-16 04:32:13.080703 | orchestrator | + min_ram_mb = (known after apply)
2026-04-16 04:32:13.080709 | orchestrator | + most_recent = true
2026-04-16 04:32:13.080715 | orchestrator | + name = (known after apply)
2026-04-16 04:32:13.080722 | orchestrator | + protected = (known after apply)
2026-04-16 04:32:13.080728 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.080738 | orchestrator | + schema = (known after apply)
2026-04-16 04:32:13.080744 | orchestrator | + size_bytes = (known after apply)
2026-04-16 04:32:13.080750 | orchestrator | + tags = (known after apply)
2026-04-16 04:32:13.080757 | orchestrator | + updated_at = (known after apply)
2026-04-16 04:32:13.080763 | orchestrator | }
2026-04-16 04:32:13.080890 | orchestrator |
2026-04-16 04:32:13.080909 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-04-16 04:32:13.080917 | orchestrator | # (config refers to values not yet known)
2026-04-16 04:32:13.080923 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-04-16 04:32:13.080930 | orchestrator | + checksum = (known after apply)
2026-04-16 04:32:13.080936 | orchestrator | + created_at = (known after apply)
2026-04-16 04:32:13.080942 | orchestrator | + file = (known after apply)
2026-04-16 04:32:13.080949 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.080955 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.081017 | orchestrator | + min_disk_gb = (known after apply)
2026-04-16 04:32:13.081024 | orchestrator | + min_ram_mb = (known after apply)
2026-04-16 04:32:13.081030 | orchestrator | + most_recent = true
2026-04-16 04:32:13.081037 | orchestrator | + name = (known after apply)
2026-04-16 04:32:13.081043 | orchestrator | + protected = (known after apply)
2026-04-16 04:32:13.081050 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.081056 | orchestrator | + schema = (known after apply)
2026-04-16 04:32:13.081062 | orchestrator | + size_bytes = (known after apply)
2026-04-16 04:32:13.081069 | orchestrator | + tags = (known after apply)
2026-04-16 04:32:13.081075 | orchestrator | + updated_at = (known after apply)
2026-04-16 04:32:13.081082 | orchestrator | }
2026-04-16 04:32:13.081204 | orchestrator |
2026-04-16 04:32:13.081224 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-04-16 04:32:13.081232 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-04-16 04:32:13.081239 | orchestrator | + content = (known after apply)
2026-04-16 04:32:13.081245 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-16 04:32:13.081251 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-16 04:32:13.081258 | orchestrator | + content_md5 = (known after apply)
2026-04-16 04:32:13.081264 | orchestrator | + content_sha1 = (known after apply)
2026-04-16 04:32:13.081270 | orchestrator | + content_sha256 = (known after apply)
2026-04-16 04:32:13.081277 | orchestrator | + content_sha512 = (known after apply)
2026-04-16 04:32:13.081283 | orchestrator | + directory_permission = "0777"
2026-04-16 04:32:13.081289 | orchestrator | + file_permission = "0644"
2026-04-16 04:32:13.081296 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-04-16 04:32:13.081302 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.081308 | orchestrator | }
2026-04-16 04:32:13.081415 | orchestrator |
2026-04-16 04:32:13.081434 | orchestrator | # local_file.id_rsa_pub will be created
2026-04-16 04:32:13.081441 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-04-16 04:32:13.081447 | orchestrator | + content = (known after apply)
2026-04-16 04:32:13.081453 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-16 04:32:13.081459 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-16 04:32:13.081466 | orchestrator | + content_md5 = (known after apply)
2026-04-16 04:32:13.081472 | orchestrator | + content_sha1 = (known after apply)
2026-04-16 04:32:13.081478 | orchestrator | + content_sha256 = (known after apply)
2026-04-16 04:32:13.081493 | orchestrator | + content_sha512 = (known after apply)
2026-04-16 04:32:13.081500 | orchestrator | + directory_permission = "0777"
2026-04-16 04:32:13.081506 | orchestrator | + file_permission = "0644"
2026-04-16 04:32:13.081520 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-04-16 04:32:13.081526 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.081532 | orchestrator | }
2026-04-16 04:32:13.081700 | orchestrator |
2026-04-16 04:32:13.081725 | orchestrator | # local_file.inventory will be created
2026-04-16 04:32:13.081732 | orchestrator | + resource "local_file" "inventory" {
2026-04-16 04:32:13.081738 | orchestrator | + content = (known after apply)
2026-04-16 04:32:13.081745 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-16 04:32:13.081751 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-16 04:32:13.081758 | orchestrator | + content_md5 = (known after apply)
2026-04-16 04:32:13.081764 | orchestrator | + content_sha1 = (known after apply)
2026-04-16 04:32:13.081771 | orchestrator | + content_sha256 = (known after apply)
2026-04-16 04:32:13.081777 | orchestrator | + content_sha512 = (known after apply)
2026-04-16 04:32:13.081783 | orchestrator | + directory_permission = "0777"
2026-04-16 04:32:13.081789 | orchestrator | + file_permission = "0644"
2026-04-16 04:32:13.081796 | orchestrator | + filename = "inventory.ci"
2026-04-16 04:32:13.081802 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.081808 | orchestrator | }
2026-04-16 04:32:13.081920 | orchestrator |
2026-04-16 04:32:13.081938 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-04-16 04:32:13.081945 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-04-16 04:32:13.081951 | orchestrator | + content = (sensitive value)
2026-04-16 04:32:13.081957 | orchestrator | + content_base64sha256 = (known after apply)
2026-04-16 04:32:13.081963 | orchestrator | + content_base64sha512 = (known after apply)
2026-04-16 04:32:13.081970 | orchestrator | + content_md5 = (known after apply)
2026-04-16 04:32:13.081976 | orchestrator | + content_sha1 = (known after apply)
2026-04-16 04:32:13.081982 | orchestrator | + content_sha256 = (known after apply)
2026-04-16 04:32:13.081988 | orchestrator | + content_sha512 = (known after apply)
2026-04-16 04:32:13.081994 | orchestrator | + directory_permission = "0700"
2026-04-16 04:32:13.082001 | orchestrator | + file_permission = "0600"
2026-04-16 04:32:13.082007 | orchestrator | + filename = ".id_rsa.ci"
2026-04-16 04:32:13.082013 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082042 | orchestrator | }
2026-04-16 04:32:13.082078 | orchestrator |
2026-04-16 04:32:13.082095 | orchestrator | # null_resource.node_semaphore will be created
2026-04-16 04:32:13.082102 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-04-16 04:32:13.082109 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082115 | orchestrator | }
2026-04-16 04:32:13.082220 | orchestrator |
2026-04-16 04:32:13.082236 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-04-16 04:32:13.082243 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-04-16 04:32:13.082248 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.082254 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.082259 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082265 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.082270 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.082276 | orchestrator | + name = "testbed-volume-manager-base"
2026-04-16 04:32:13.082281 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.082287 | orchestrator | + size = 80
2026-04-16 04:32:13.082292 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.082298 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.082303 | orchestrator | }
2026-04-16 04:32:13.082391 | orchestrator |
2026-04-16 04:32:13.082407 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-04-16 04:32:13.082413 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.082418 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.082424 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.082429 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082441 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.082447 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.082452 | orchestrator | + name = "testbed-volume-0-node-base"
2026-04-16 04:32:13.082457 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.082463 | orchestrator | + size = 80
2026-04-16 04:32:13.082468 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.082474 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.082479 | orchestrator | }
2026-04-16 04:32:13.082566 | orchestrator |
2026-04-16 04:32:13.082581 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-04-16 04:32:13.082587 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.082593 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.082598 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.082604 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082609 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.082615 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.082620 | orchestrator | + name = "testbed-volume-1-node-base"
2026-04-16 04:32:13.082625 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.082631 | orchestrator | + size = 80
2026-04-16 04:32:13.082636 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.082642 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.082647 | orchestrator | }
2026-04-16 04:32:13.082749 | orchestrator |
2026-04-16 04:32:13.082765 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-04-16 04:32:13.082771 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.082776 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.082782 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.082787 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082793 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.082798 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.082803 | orchestrator | + name = "testbed-volume-2-node-base"
2026-04-16 04:32:13.082809 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.082814 | orchestrator | + size = 80
2026-04-16 04:32:13.082825 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.082831 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.082836 | orchestrator | }
2026-04-16 04:32:13.082924 | orchestrator |
2026-04-16 04:32:13.082940 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-04-16 04:32:13.082947 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.082952 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.082958 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.082963 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.082968 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.082974 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.082979 | orchestrator | + name = "testbed-volume-3-node-base"
2026-04-16 04:32:13.082985 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.082990 | orchestrator | + size = 80
2026-04-16 04:32:13.082996 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083001 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083006 | orchestrator | }
2026-04-16 04:32:13.083087 | orchestrator |
2026-04-16 04:32:13.083102 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-04-16 04:32:13.083108 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.083114 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083119 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083125 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083135 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.083141 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083146 | orchestrator | + name = "testbed-volume-4-node-base"
2026-04-16 04:32:13.083152 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083157 | orchestrator | + size = 80
2026-04-16 04:32:13.083162 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083168 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083173 | orchestrator | }
2026-04-16 04:32:13.083256 | orchestrator |
2026-04-16 04:32:13.083271 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-04-16 04:32:13.083277 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-04-16 04:32:13.083283 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083288 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083293 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083299 | orchestrator | + image_id = (known after apply)
2026-04-16 04:32:13.083304 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083310 | orchestrator | + name = "testbed-volume-5-node-base"
2026-04-16 04:32:13.083315 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083320 | orchestrator | + size = 80
2026-04-16 04:32:13.083326 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083331 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083337 | orchestrator | }
2026-04-16 04:32:13.083417 | orchestrator |
2026-04-16 04:32:13.083432 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-04-16 04:32:13.083439 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.083444 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083450 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083455 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083461 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083466 | orchestrator | + name = "testbed-volume-0-node-3"
2026-04-16 04:32:13.083472 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083477 | orchestrator | + size = 20
2026-04-16 04:32:13.083483 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083489 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083494 | orchestrator | }
2026-04-16 04:32:13.083572 | orchestrator |
2026-04-16 04:32:13.083587 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-04-16 04:32:13.083593 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.083599 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083604 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083609 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083615 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083620 | orchestrator | + name = "testbed-volume-1-node-4"
2026-04-16 04:32:13.083625 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083631 | orchestrator | + size = 20
2026-04-16 04:32:13.083636 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083642 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083647 | orchestrator | }
2026-04-16 04:32:13.083741 | orchestrator |
2026-04-16 04:32:13.083758 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-04-16 04:32:13.083764 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.083770 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083775 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083780 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083786 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083791 | orchestrator | + name = "testbed-volume-2-node-5"
2026-04-16 04:32:13.083797 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083811 | orchestrator | + size = 20
2026-04-16 04:32:13.083817 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083822 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083827 | orchestrator | }
2026-04-16 04:32:13.083906 | orchestrator |
2026-04-16 04:32:13.083922 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-04-16 04:32:13.083928 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.083934 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.083939 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.083944 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.083954 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.083960 | orchestrator | + name = "testbed-volume-3-node-3"
2026-04-16 04:32:13.083965 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.083971 | orchestrator | + size = 20
2026-04-16 04:32:13.083976 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.083982 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.083987 | orchestrator | }
2026-04-16 04:32:13.084068 | orchestrator |
2026-04-16 04:32:13.084084 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-04-16 04:32:13.084091 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.084096 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.084102 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.084107 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.084113 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.084118 | orchestrator | + name = "testbed-volume-4-node-4"
2026-04-16 04:32:13.084124 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.084129 | orchestrator | + size = 20
2026-04-16 04:32:13.084134 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.084140 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.084145 | orchestrator | }
2026-04-16 04:32:13.084221 | orchestrator |
2026-04-16 04:32:13.084238 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-04-16 04:32:13.084244 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.084249 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.084255 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.084260 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.084265 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.084271 | orchestrator | + name = "testbed-volume-5-node-5"
2026-04-16 04:32:13.084276 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.084282 | orchestrator | + size = 20
2026-04-16 04:32:13.084287 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.084292 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.084298 | orchestrator | }
2026-04-16 04:32:13.084375 | orchestrator |
2026-04-16 04:32:13.084392 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-04-16 04:32:13.084398 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.084404 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.084409 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.084415 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.084420 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.084426 | orchestrator | + name = "testbed-volume-6-node-3"
2026-04-16 04:32:13.084431 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.084436 | orchestrator | + size = 20
2026-04-16 04:32:13.084442 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.084447 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.084453 | orchestrator | }
2026-04-16 04:32:13.084527 | orchestrator |
2026-04-16 04:32:13.084543 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-04-16 04:32:13.084550 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-04-16 04:32:13.084560 | orchestrator | + attachment = (known after apply)
2026-04-16 04:32:13.084566 | orchestrator | + availability_zone = "nova"
2026-04-16 04:32:13.084571 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.084577 | orchestrator | + metadata = (known after apply)
2026-04-16 04:32:13.084582 | orchestrator | + name = "testbed-volume-7-node-4"
2026-04-16 04:32:13.084587 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.084593 | orchestrator | + size = 20
2026-04-16 04:32:13.084598 | orchestrator | + volume_retype_policy = "never"
2026-04-16 04:32:13.084604 | orchestrator | + volume_type = "ssd"
2026-04-16 04:32:13.084609 | orchestrator | }
2026-04-16 04:32:13.084703 | orchestrator |
2026-04-16 04:32:13.084719 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-04-16 04:32:13.084725 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-04-16 04:32:13.084731 | orchestrator | + attachment = (known after apply) 2026-04-16 04:32:13.084736 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.084741 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.084747 | orchestrator | + metadata = (known after apply) 2026-04-16 04:32:13.084752 | orchestrator | + name = "testbed-volume-8-node-5" 2026-04-16 04:32:13.084758 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.084763 | orchestrator | + size = 20 2026-04-16 04:32:13.084768 | orchestrator | + volume_retype_policy = "never" 2026-04-16 04:32:13.084783 | orchestrator | + volume_type = "ssd" 2026-04-16 04:32:13.084789 | orchestrator | } 2026-04-16 04:32:13.085055 | orchestrator | 2026-04-16 04:32:13.085073 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-04-16 04:32:13.085079 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-04-16 04:32:13.085085 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.085090 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.085096 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.085101 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.085107 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.085112 | orchestrator | + config_drive = true 2026-04-16 04:32:13.085122 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.085128 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.085133 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-04-16 04:32:13.085138 | orchestrator | + force_delete = false 2026-04-16 04:32:13.085144 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.085149 | 
orchestrator | + id = (known after apply) 2026-04-16 04:32:13.085154 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.085160 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.085165 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.085170 | orchestrator | + name = "testbed-manager" 2026-04-16 04:32:13.085176 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.085181 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.085186 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.085192 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.085197 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.085203 | orchestrator | + user_data = (sensitive value) 2026-04-16 04:32:13.085208 | orchestrator | 2026-04-16 04:32:13.085214 | orchestrator | + block_device { 2026-04-16 04:32:13.085219 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.085225 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.085230 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.085236 | orchestrator | + multiattach = false 2026-04-16 04:32:13.085241 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.085246 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.085256 | orchestrator | } 2026-04-16 04:32:13.085262 | orchestrator | 2026-04-16 04:32:13.085268 | orchestrator | + network { 2026-04-16 04:32:13.085273 | orchestrator | + access_network = false 2026-04-16 04:32:13.085278 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.085284 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.085289 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.085294 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.085300 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.085305 | orchestrator | + uuid = (known after apply) 2026-04-16 
04:32:13.085311 | orchestrator | } 2026-04-16 04:32:13.085316 | orchestrator | } 2026-04-16 04:32:13.085619 | orchestrator | 2026-04-16 04:32:13.085643 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-04-16 04:32:13.085649 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.085748 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.085754 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.085759 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.085765 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.085770 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.085776 | orchestrator | + config_drive = true 2026-04-16 04:32:13.085781 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.085787 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.085792 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.085798 | orchestrator | + force_delete = false 2026-04-16 04:32:13.085803 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.085809 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.085815 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.085820 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.085825 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.085831 | orchestrator | + name = "testbed-node-0" 2026-04-16 04:32:13.085836 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.085842 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.085847 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.085852 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.085858 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.085864 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.085869 | orchestrator | 2026-04-16 04:32:13.085875 | orchestrator | + block_device { 2026-04-16 04:32:13.085880 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.085886 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.085891 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.085897 | orchestrator | + multiattach = false 2026-04-16 04:32:13.085902 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.085908 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.085914 | orchestrator | } 2026-04-16 04:32:13.085919 | orchestrator | 2026-04-16 04:32:13.085924 | orchestrator | + network { 2026-04-16 04:32:13.085930 | orchestrator | + access_network = false 2026-04-16 04:32:13.085935 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.085941 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.085946 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.085952 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.085957 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.085963 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.085968 | orchestrator | } 2026-04-16 04:32:13.085974 | orchestrator | } 2026-04-16 04:32:13.086258 | orchestrator | 2026-04-16 04:32:13.086276 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-04-16 04:32:13.086282 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.086287 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.086299 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.086304 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.086309 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.086314 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.086319 
| orchestrator | + config_drive = true 2026-04-16 04:32:13.086323 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.086328 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.086333 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.086338 | orchestrator | + force_delete = false 2026-04-16 04:32:13.086343 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.086347 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.086352 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.086357 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.086361 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.086366 | orchestrator | + name = "testbed-node-1" 2026-04-16 04:32:13.086371 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.086376 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.086381 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.086385 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.086390 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.086399 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.086404 | orchestrator | 2026-04-16 04:32:13.086409 | orchestrator | + block_device { 2026-04-16 04:32:13.086414 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.086419 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.086424 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.086428 | orchestrator | + multiattach = false 2026-04-16 04:32:13.086433 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.086438 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.086443 | orchestrator | } 2026-04-16 04:32:13.086447 | orchestrator | 2026-04-16 04:32:13.086452 | orchestrator | + network { 2026-04-16 04:32:13.086457 | orchestrator | + access_network = 
false 2026-04-16 04:32:13.086462 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.086466 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.086471 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.086476 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.086481 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.086485 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.086490 | orchestrator | } 2026-04-16 04:32:13.086495 | orchestrator | } 2026-04-16 04:32:13.086743 | orchestrator | 2026-04-16 04:32:13.086760 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-04-16 04:32:13.086766 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.086771 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.086776 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.086782 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.086787 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.086791 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.086796 | orchestrator | + config_drive = true 2026-04-16 04:32:13.086801 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.086806 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.086810 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.086815 | orchestrator | + force_delete = false 2026-04-16 04:32:13.086820 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.086825 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.086830 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.086839 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.086844 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.086849 | orchestrator | + name = 
"testbed-node-2" 2026-04-16 04:32:13.086854 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.086858 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.086863 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.086868 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.086873 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.086878 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.086882 | orchestrator | 2026-04-16 04:32:13.086887 | orchestrator | + block_device { 2026-04-16 04:32:13.086892 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.086897 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.086902 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.086907 | orchestrator | + multiattach = false 2026-04-16 04:32:13.086911 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.086916 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.086921 | orchestrator | } 2026-04-16 04:32:13.086926 | orchestrator | 2026-04-16 04:32:13.086930 | orchestrator | + network { 2026-04-16 04:32:13.086935 | orchestrator | + access_network = false 2026-04-16 04:32:13.086940 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.086945 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.086950 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.086954 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.086959 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.086964 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.086969 | orchestrator | } 2026-04-16 04:32:13.086974 | orchestrator | } 2026-04-16 04:32:13.087223 | orchestrator | 2026-04-16 04:32:13.087242 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-04-16 04:32:13.087248 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.087253 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.087258 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.087262 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.087267 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.087272 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.087277 | orchestrator | + config_drive = true 2026-04-16 04:32:13.087282 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.087286 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.087291 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.087296 | orchestrator | + force_delete = false 2026-04-16 04:32:13.087300 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.087305 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.087310 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.087315 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.087320 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.087325 | orchestrator | + name = "testbed-node-3" 2026-04-16 04:32:13.087329 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.087334 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.087339 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.087344 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.087348 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.087353 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.087358 | orchestrator | 2026-04-16 04:32:13.087363 | orchestrator | + block_device { 2026-04-16 04:32:13.087367 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.087372 | orchestrator | + delete_on_termination = false 2026-04-16 
04:32:13.087377 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.087386 | orchestrator | + multiattach = false 2026-04-16 04:32:13.087391 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.087396 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.087401 | orchestrator | } 2026-04-16 04:32:13.087406 | orchestrator | 2026-04-16 04:32:13.087410 | orchestrator | + network { 2026-04-16 04:32:13.087415 | orchestrator | + access_network = false 2026-04-16 04:32:13.087420 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.087425 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.087429 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.087434 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.087439 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.087444 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.087448 | orchestrator | } 2026-04-16 04:32:13.087453 | orchestrator | } 2026-04-16 04:32:13.087751 | orchestrator | 2026-04-16 04:32:13.087770 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-04-16 04:32:13.087776 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.087781 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.087786 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.087790 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.087795 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.087800 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.087805 | orchestrator | + config_drive = true 2026-04-16 04:32:13.087810 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.087815 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.087819 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.087824 | 
orchestrator | + force_delete = false 2026-04-16 04:32:13.087829 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.087834 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.087839 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.087844 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.087848 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.087853 | orchestrator | + name = "testbed-node-4" 2026-04-16 04:32:13.087858 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.087863 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.087867 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.087872 | orchestrator | + stop_before_destroy = false 2026-04-16 04:32:13.087877 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.087882 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.087887 | orchestrator | 2026-04-16 04:32:13.087891 | orchestrator | + block_device { 2026-04-16 04:32:13.087896 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.087901 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.087906 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.087910 | orchestrator | + multiattach = false 2026-04-16 04:32:13.087915 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.087920 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.087924 | orchestrator | } 2026-04-16 04:32:13.087929 | orchestrator | 2026-04-16 04:32:13.087934 | orchestrator | + network { 2026-04-16 04:32:13.087939 | orchestrator | + access_network = false 2026-04-16 04:32:13.087943 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.087948 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.087953 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.087958 | orchestrator | + name = (known 
after apply) 2026-04-16 04:32:13.087962 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.087967 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.087972 | orchestrator | } 2026-04-16 04:32:13.087977 | orchestrator | } 2026-04-16 04:32:13.088257 | orchestrator | 2026-04-16 04:32:13.088275 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-04-16 04:32:13.088280 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-04-16 04:32:13.088285 | orchestrator | + access_ip_v4 = (known after apply) 2026-04-16 04:32:13.088290 | orchestrator | + access_ip_v6 = (known after apply) 2026-04-16 04:32:13.088295 | orchestrator | + all_metadata = (known after apply) 2026-04-16 04:32:13.088300 | orchestrator | + all_tags = (known after apply) 2026-04-16 04:32:13.088304 | orchestrator | + availability_zone = "nova" 2026-04-16 04:32:13.088309 | orchestrator | + config_drive = true 2026-04-16 04:32:13.088314 | orchestrator | + created = (known after apply) 2026-04-16 04:32:13.088319 | orchestrator | + flavor_id = (known after apply) 2026-04-16 04:32:13.088323 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-04-16 04:32:13.088328 | orchestrator | + force_delete = false 2026-04-16 04:32:13.088333 | orchestrator | + hypervisor_hostname = (known after apply) 2026-04-16 04:32:13.088338 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.088343 | orchestrator | + image_id = (known after apply) 2026-04-16 04:32:13.088347 | orchestrator | + image_name = (known after apply) 2026-04-16 04:32:13.088352 | orchestrator | + key_pair = "testbed" 2026-04-16 04:32:13.088357 | orchestrator | + name = "testbed-node-5" 2026-04-16 04:32:13.088361 | orchestrator | + power_state = "active" 2026-04-16 04:32:13.088366 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.088373 | orchestrator | + security_groups = (known after apply) 2026-04-16 04:32:13.088381 | orchestrator | + 
stop_before_destroy = false 2026-04-16 04:32:13.088389 | orchestrator | + updated = (known after apply) 2026-04-16 04:32:13.088395 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-04-16 04:32:13.088400 | orchestrator | 2026-04-16 04:32:13.088405 | orchestrator | + block_device { 2026-04-16 04:32:13.088410 | orchestrator | + boot_index = 0 2026-04-16 04:32:13.088414 | orchestrator | + delete_on_termination = false 2026-04-16 04:32:13.088419 | orchestrator | + destination_type = "volume" 2026-04-16 04:32:13.088424 | orchestrator | + multiattach = false 2026-04-16 04:32:13.088429 | orchestrator | + source_type = "volume" 2026-04-16 04:32:13.088433 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.088438 | orchestrator | } 2026-04-16 04:32:13.088443 | orchestrator | 2026-04-16 04:32:13.088448 | orchestrator | + network { 2026-04-16 04:32:13.088452 | orchestrator | + access_network = false 2026-04-16 04:32:13.088457 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-04-16 04:32:13.088462 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-04-16 04:32:13.088467 | orchestrator | + mac = (known after apply) 2026-04-16 04:32:13.088472 | orchestrator | + name = (known after apply) 2026-04-16 04:32:13.088476 | orchestrator | + port = (known after apply) 2026-04-16 04:32:13.088481 | orchestrator | + uuid = (known after apply) 2026-04-16 04:32:13.088486 | orchestrator | } 2026-04-16 04:32:13.088491 | orchestrator | } 2026-04-16 04:32:13.088559 | orchestrator | 2026-04-16 04:32:13.088574 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-04-16 04:32:13.088579 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-04-16 04:32:13.088584 | orchestrator | + fingerprint = (known after apply) 2026-04-16 04:32:13.088589 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.088594 | orchestrator | + name = "testbed" 2026-04-16 04:32:13.088598 | orchestrator | + private_key = 
(sensitive value) 2026-04-16 04:32:13.088603 | orchestrator | + public_key = (known after apply) 2026-04-16 04:32:13.088608 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.088613 | orchestrator | + user_id = (known after apply) 2026-04-16 04:32:13.088618 | orchestrator | } 2026-04-16 04:32:13.088680 | orchestrator | 2026-04-16 04:32:13.088695 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-04-16 04:32:13.088701 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-16 04:32:13.088711 | orchestrator | + device = (known after apply) 2026-04-16 04:32:13.088716 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.088721 | orchestrator | + instance_id = (known after apply) 2026-04-16 04:32:13.088726 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.088734 | orchestrator | + volume_id = (known after apply) 2026-04-16 04:32:13.088739 | orchestrator | } 2026-04-16 04:32:13.088791 | orchestrator | 2026-04-16 04:32:13.088805 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-04-16 04:32:13.088811 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-04-16 04:32:13.088816 | orchestrator | + device = (known after apply) 2026-04-16 04:32:13.088820 | orchestrator | + id = (known after apply) 2026-04-16 04:32:13.088825 | orchestrator | + instance_id = (known after apply) 2026-04-16 04:32:13.088830 | orchestrator | + region = (known after apply) 2026-04-16 04:32:13.088835 | orchestrator | + volume_id = (known after apply) 2026-04-16 04:32:13.088840 | orchestrator | } 2026-04-16 04:32:13.088901 | orchestrator | 2026-04-16 04:32:13.088915 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-04-16 04:32:13.088921 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-04-16 04:32:13.095257 | orchestrator | + network_id = (known after apply)
2026-04-16 04:32:13.095260 | orchestrator | + no_gateway = false
2026-04-16 04:32:13.095264 | orchestrator | + region = (known after apply)
2026-04-16 04:32:13.095268 | orchestrator | + service_types = (known after apply)
2026-04-16 04:32:13.095278 | orchestrator | + tenant_id = (known after apply)
2026-04-16 04:32:13.095282 | orchestrator |
2026-04-16 04:32:13.095286 | orchestrator | + allocation_pool {
2026-04-16 04:32:13.095331 | orchestrator | + end = "192.168.31.250"
2026-04-16 04:32:13.095335 | orchestrator | + start = "192.168.31.200"
2026-04-16 04:32:13.095338 | orchestrator | }
2026-04-16 04:32:13.095342 | orchestrator | }
2026-04-16 04:32:13.095388 | orchestrator |
2026-04-16 04:32:13.095400 | orchestrator | # terraform_data.image will be created
2026-04-16 04:32:13.095404 | orchestrator | + resource "terraform_data" "image" {
2026-04-16 04:32:13.095408 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.095412 | orchestrator | + input = "Ubuntu 24.04"
2026-04-16 04:32:13.095415 | orchestrator | + output = (known after apply)
2026-04-16 04:32:13.095419 | orchestrator | }
2026-04-16 04:32:13.095463 | orchestrator |
2026-04-16 04:32:13.095474 | orchestrator | # terraform_data.image_node will be created
2026-04-16 04:32:13.095479 | orchestrator | + resource "terraform_data" "image_node" {
2026-04-16 04:32:13.095483 | orchestrator | + id = (known after apply)
2026-04-16 04:32:13.095486 | orchestrator | + input = "Ubuntu 24.04"
2026-04-16 04:32:13.095490 | orchestrator | + output = (known after apply)
2026-04-16 04:32:13.095494 | orchestrator | }
2026-04-16 04:32:13.095509 | orchestrator |
2026-04-16 04:32:13.095513 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-04-16 04:32:13.095524 | orchestrator |
2026-04-16 04:32:13.095528 | orchestrator | Changes to Outputs:
2026-04-16 04:32:13.095538 | orchestrator | + manager_address = (sensitive value)
2026-04-16 04:32:13.095542 | orchestrator | + private_key = (sensitive value)
2026-04-16 04:32:13.339703 | orchestrator | terraform_data.image_node: Creating...
2026-04-16 04:32:13.340094 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=a752dce2-4bf2-4988-83c3-18c8f2157466]
2026-04-16 04:32:13.340612 | orchestrator | terraform_data.image: Creating...
2026-04-16 04:32:13.340975 | orchestrator | terraform_data.image: Creation complete after 0s [id=7b2c16de-2668-49a8-3bb8-62b0ffc0e1b0]
2026-04-16 04:32:13.364603 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-04-16 04:32:13.375020 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-04-16 04:32:13.375177 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-04-16 04:32:13.376385 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-04-16 04:32:13.376855 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-04-16 04:32:13.377483 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-04-16 04:32:13.377954 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-04-16 04:32:13.378325 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-04-16 04:32:13.380991 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-04-16 04:32:13.387598 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-04-16 04:32:13.810337 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-16 04:32:13.816892 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-04-16 04:32:13.874948 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-04-16 04:32:13.884392 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-04-16 04:32:13.891580 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-04-16 04:32:13.897267 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-04-16 04:32:14.445823 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 0s [id=d9a6bfc7-5755-49a1-af77-16189ea83b05]
2026-04-16 04:32:14.462439 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-04-16 04:32:16.972647 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=ad98f1c3-bcf7-4daa-8620-21ecec1aea13]
2026-04-16 04:32:16.980437 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-04-16 04:32:16.990253 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=2cf3122c-2131-4b44-b1eb-9d24190083bb]
2026-04-16 04:32:17.001550 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e0a81747-53de-4864-82c1-214d11586042]
2026-04-16 04:32:17.004463 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-04-16 04:32:17.014083 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-04-16 04:32:17.024705 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=6e9659e4-3cc7-4909-ad5f-d807239f86c3]
2026-04-16 04:32:17.027877 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d]
2026-04-16 04:32:17.030246 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-04-16 04:32:17.032645 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-04-16 04:32:17.038326 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=68199fda-8c99-469d-abab-c5a57188e834]
2026-04-16 04:32:17.048727 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=e9d72273-cf2e-45b4-9a8d-8e467f71ab1e]
2026-04-16 04:32:17.048798 | orchestrator | local_file.id_rsa_pub: Creating...
2026-04-16 04:32:17.052593 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=afd2bd2a0a9cb274b1a9dfcab1b91ece22693112]
2026-04-16 04:32:17.054334 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-04-16 04:32:17.059921 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-04-16 04:32:17.063562 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=a2f34037abb2dc1ec1d42f84dcbaf82c2dc137c5]
2026-04-16 04:32:17.068833 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-04-16 04:32:17.096493 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=246d5233-913f-43b5-865e-f11d086eabe3]
2026-04-16 04:32:17.114996 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=5b9c3369-0440-4506-af4c-01bb913afd99]
2026-04-16 04:32:17.796443 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=b594b91e-33b3-4c29-b9e6-3b2f15c3c19e]
2026-04-16 04:32:18.026080 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ce0736ea-dd15-4b0f-86e2-ecfac25827a0]
2026-04-16 04:32:18.032266 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-04-16 04:32:20.402555 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=7032e080-debe-4ddb-9f2d-e4e5a5f8dba8]
2026-04-16 04:32:20.413832 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=6b3387fe-ddff-45b8-a1d5-c29892c481d8]
2026-04-16 04:32:20.432208 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=2c911509-71b2-4fd0-889a-85a88ccb094b]
2026-04-16 04:32:20.435875 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=aeef7ba8-9496-4124-aafb-d41f3a2fc5cd]
2026-04-16 04:32:20.508293 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=4a571ce0-7910-4acd-a84f-c7c407a3a7e5]
2026-04-16 04:32:20.525734 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=375db26a-2184-4380-988d-01ed4e876c64]
2026-04-16 04:32:21.731104 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=457507bd-7877-4e35-8063-934e8b2a8806]
2026-04-16 04:32:21.738644 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-04-16 04:32:21.738850 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-04-16 04:32:21.739984 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-04-16 04:32:21.963265 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=82ae16d6-624f-4121-8578-cf1ee6f85fed]
2026-04-16 04:32:21.973105 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-04-16 04:32:21.977174 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-04-16 04:32:21.977640 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4d4d3378-25ec-4244-91c3-b7587b9efee1]
2026-04-16 04:32:21.977904 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-04-16 04:32:21.978255 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-04-16 04:32:21.988825 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-04-16 04:32:21.989262 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-04-16 04:32:21.989990 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-04-16 04:32:21.991463 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-04-16 04:32:21.992075 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-04-16 04:32:22.201592 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=1ffa45b1-f908-44aa-a99c-431585549e62]
2026-04-16 04:32:22.219133 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-04-16 04:32:22.597887 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=241fde51-8f1e-4832-a1d0-fd117a665f01]
2026-04-16 04:32:22.607202 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-04-16 04:32:22.677541 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=e5d5c799-887a-4f68-9d37-283a95554adb]
2026-04-16 04:32:22.686334 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-04-16 04:32:22.698513 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=2e72b957-5000-450d-9109-792b91250723]
2026-04-16 04:32:22.705321 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-04-16 04:32:22.754198 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=08b8dc7b-93d2-49e0-ad31-b0ed4f7923f8]
2026-04-16 04:32:22.760468 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-04-16 04:32:22.765061 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=ef701fdc-b1da-474d-8fb5-39774e8e8d53]
2026-04-16 04:32:22.770406 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-04-16 04:32:22.791596 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7a6e62f9-2274-48d2-a4da-77c5bdd2f224]
2026-04-16 04:32:22.798221 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-04-16 04:32:22.822253 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=85783b00-10ff-48c4-adf4-e860e28b9415]
2026-04-16 04:32:22.851071 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=b1756d93-3b54-4e44-ab25-94293c191fc9]
2026-04-16 04:32:22.876335 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=26947b69-b148-4c37-958f-ca18a130a8c7]
2026-04-16 04:32:22.936597 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=a11fa104-278a-4cbf-a8a4-27424c1bca89]
2026-04-16 04:32:23.048044 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=0607f8d4-0dfd-48eb-93e6-b0a2f44c4ab9]
2026-04-16 04:32:23.189584 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=d00db4de-991e-4045-91cf-36af7de3a4c3]
2026-04-16 04:32:23.356936 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=7fcf465b-b212-4fad-a502-31b42d315513]
2026-04-16 04:32:23.511559 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=36c437ae-1776-472b-b7c1-b3afcebd63eb]
2026-04-16 04:32:23.747049 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=74d7e519-07a4-4c85-bc50-aef8ccca1b3b]
2026-04-16 04:32:25.237207 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=514466df-9835-4630-bc39-1c0eccb16e85]
2026-04-16 04:32:25.267352 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-04-16 04:32:25.269256 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-04-16 04:32:25.271889 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-04-16 04:32:25.277977 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-04-16 04:32:25.281373 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-04-16 04:32:25.282946 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-04-16 04:32:25.292281 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-04-16 04:32:26.613224 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=81b8f6a3-71f2-4d3d-b3ec-6db74f303e76]
2026-04-16 04:32:26.619095 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-04-16 04:32:26.625011 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-04-16 04:32:26.626979 | orchestrator | local_file.inventory: Creating...
2026-04-16 04:32:26.629740 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=df1a796b9be2a3fe3875cb7c3cab99360bec0582]
2026-04-16 04:32:26.633332 | orchestrator | local_file.inventory: Creation complete after 0s [id=05da58db7ff3adc5273efb3dd158e1f6561b55fe]
2026-04-16 04:32:27.433970 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=81b8f6a3-71f2-4d3d-b3ec-6db74f303e76]
2026-04-16 04:32:35.269938 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-04-16 04:32:35.272066 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-04-16 04:32:35.279471 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-04-16 04:32:35.291772 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-04-16 04:32:35.292951 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-04-16 04:32:35.293934 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-04-16 04:32:45.270968 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-04-16 04:32:45.272908 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-04-16 04:32:45.280272 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-04-16 04:32:45.292080 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-04-16 04:32:45.293112 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-04-16 04:32:45.294281 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-04-16 04:32:45.676820 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=fa404fe9-e3f1-42e7-a1f5-182ceaae5df5]
2026-04-16 04:32:45.680442 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=091f0816-9193-4547-849b-43d5c0c63d40]
2026-04-16 04:32:45.753424 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=76542dcd-40e3-4379-96ea-783e3f130775]
2026-04-16 04:32:55.280000 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-04-16 04:32:55.281131 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-04-16 04:32:55.294663 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-04-16 04:32:55.927343 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=7046a3c2-99c2-48dd-9000-d3fd4d934183]
2026-04-16 04:32:55.975291 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=80a65917-5a9a-49ec-9b2d-80c3fa8a479a]
2026-04-16 04:32:56.005778 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=57864929-8c32-43be-8e5a-f27590f5764a]
2026-04-16 04:32:56.014593 | orchestrator | null_resource.node_semaphore: Creating...
2026-04-16 04:32:56.031714 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-04-16 04:32:56.035692 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=6449518665132072786]
2026-04-16 04:32:56.038163 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-04-16 04:32:56.038519 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-04-16 04:32:56.043170 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-04-16 04:32:56.047197 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-04-16 04:32:56.047813 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-04-16 04:32:56.066251 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-04-16 04:32:56.066963 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-04-16 04:32:56.073239 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-04-16 04:32:56.079761 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-04-16 04:32:59.437538 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=7046a3c2-99c2-48dd-9000-d3fd4d934183/5b9c3369-0440-4506-af4c-01bb913afd99]
2026-04-16 04:32:59.452186 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=091f0816-9193-4547-849b-43d5c0c63d40/246d5233-913f-43b5-865e-f11d086eabe3]
2026-04-16 04:32:59.472687 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=80a65917-5a9a-49ec-9b2d-80c3fa8a479a/2cf3122c-2131-4b44-b1eb-9d24190083bb]
2026-04-16 04:32:59.475870 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=7046a3c2-99c2-48dd-9000-d3fd4d934183/6e9659e4-3cc7-4909-ad5f-d807239f86c3]
2026-04-16 04:32:59.486393 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=091f0816-9193-4547-849b-43d5c0c63d40/e0a81747-53de-4864-82c1-214d11586042]
2026-04-16 04:32:59.507374 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=80a65917-5a9a-49ec-9b2d-80c3fa8a479a/68199fda-8c99-469d-abab-c5a57188e834]
2026-04-16 04:33:05.581306 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=7046a3c2-99c2-48dd-9000-d3fd4d934183/ad98f1c3-bcf7-4daa-8620-21ecec1aea13]
2026-04-16 04:33:05.602516 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=091f0816-9193-4547-849b-43d5c0c63d40/e9d72273-cf2e-45b4-9a8d-8e467f71ab1e]
2026-04-16 04:33:05.638883 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=80a65917-5a9a-49ec-9b2d-80c3fa8a479a/9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d]
2026-04-16 04:33:06.079971 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-04-16 04:33:16.080584 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-04-16 04:33:16.497749 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=b3f34ff7-2310-426d-ac10-7e8203bbc5df]
2026-04-16 04:33:16.527212 | orchestrator |
2026-04-16 04:33:16.528785 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-04-16 04:33:16.528829 | orchestrator |
2026-04-16 04:33:16.528845 | orchestrator | Outputs:
2026-04-16 04:33:16.528854 | orchestrator |
2026-04-16 04:33:16.529124 | orchestrator | manager_address =
2026-04-16 04:33:16.529165 | orchestrator | private_key =
2026-04-16 04:33:16.672355 | orchestrator | ok: Runtime: 0:01:09.475408
2026-04-16 04:33:16.703358 |
2026-04-16 04:33:16.703479 | TASK [Fetch manager address]
2026-04-16 04:33:17.161217 | orchestrator | ok
2026-04-16 04:33:17.172343 |
2026-04-16 04:33:17.172498 | TASK [Set manager_host address]
2026-04-16 04:33:17.267382 | orchestrator | ok
2026-04-16 04:33:17.286592 |
2026-04-16 04:33:17.286947 | LOOP [Update ansible collections]
2026-04-16 04:33:19.142405 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-16 04:33:19.142813 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-16 04:33:19.142951 | orchestrator | Starting galaxy collection install process
2026-04-16 04:33:19.143010 | orchestrator | Process install dependency map
2026-04-16 04:33:19.143069 | orchestrator | Starting collection install process
2026-04-16 04:33:19.143121 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-04-16 04:33:19.143160 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-04-16 04:33:19.143200 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-04-16 04:33:19.143276 | orchestrator | ok: Item: commons Runtime: 0:00:01.492025
2026-04-16 04:33:20.128507 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-04-16 04:33:20.128688 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-16 04:33:20.128842 | orchestrator | Starting galaxy collection install process
2026-04-16 04:33:20.128898 | orchestrator | Process install dependency map
2026-04-16 04:33:20.128945 | orchestrator | Starting collection install process
2026-04-16 04:33:20.128994 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-04-16 04:33:20.129039 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-04-16 04:33:20.129083 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-16 04:33:20.129146 | orchestrator | ok: Item: services Runtime: 0:00:00.706819
2026-04-16 04:33:20.154525 |
2026-04-16 04:33:20.154716 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-04-16 04:33:30.754616 | orchestrator | ok
2026-04-16 04:33:30.764878 |
2026-04-16 04:33:30.765013 | TASK [Wait a little longer for the manager so that everything is ready]
2026-04-16 04:34:30.812736 | orchestrator | ok
2026-04-16 04:34:30.823434 |
2026-04-16 04:34:30.823558 | TASK [Fetch manager ssh hostkey]
2026-04-16 04:34:32.398973 | orchestrator | Output suppressed because no_log was given
2026-04-16 04:34:32.414535 |
2026-04-16 04:34:32.414784 | TASK [Get ssh keypair from terraform environment]
2026-04-16 04:34:32.952202 | orchestrator | ok: Runtime: 0:00:00.011158
2026-04-16 04:34:32.969485 |
2026-04-16 04:34:32.969647 | TASK [Point out that the following task takes some time and does not give any output]
2026-04-16 04:34:33.021280 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-04-16 04:34:33.031797 |
2026-04-16 04:34:33.031941 | TASK [Run manager part 0]
2026-04-16 04:34:33.988460 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-04-16 04:34:34.076516 | orchestrator |
2026-04-16 04:34:34.076589 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-04-16 04:34:34.076600 | orchestrator |
2026-04-16 04:34:34.076617 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-04-16 04:34:36.186622 | orchestrator | ok: [testbed-manager]
2026-04-16 04:34:36.186689 | orchestrator |
2026-04-16 04:34:36.186722 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-04-16 04:34:36.186737 | orchestrator |
2026-04-16 04:34:36.186749 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-16 04:34:38.243505 | orchestrator | ok: [testbed-manager]
2026-04-16 04:34:38.243565 | orchestrator |
2026-04-16 04:34:38.243576 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-04-16 04:34:38.963877 | orchestrator | ok: [testbed-manager]
2026-04-16 04:34:38.963968 | orchestrator |
2026-04-16 04:34:38.963979 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-04-16 04:34:39.028031 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:34:39.028102 | orchestrator |
2026-04-16 04:34:39.028124 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-04-16 04:34:39.070726 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:34:39.070796 | orchestrator |
2026-04-16 04:34:39.070816 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-04-16 04:34:39.104552 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:34:39.104603 | orchestrator |
2026-04-16 04:34:39.104614 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-04-16 04:34:39.923853 | orchestrator | changed: [testbed-manager]
2026-04-16 04:34:39.923924 | orchestrator |
2026-04-16 04:34:39.923935 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-04-16 04:37:45.198007 | orchestrator | changed: [testbed-manager]
2026-04-16 04:37:45.198142 | orchestrator |
2026-04-16 04:37:45.198162 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-04-16 04:40:25.626996 | orchestrator | changed: [testbed-manager]
2026-04-16 04:40:25.627122 | orchestrator |
2026-04-16 04:40:25.627144 | orchestrator | TASK [Install required packages] ***********************************************
2026-04-16 04:40:51.980361 | orchestrator | changed: [testbed-manager]
2026-04-16 04:40:51.980485 | orchestrator |
2026-04-16 04:40:51.980507 | orchestrator | TASK [Remove some python packages] *********************************************
2026-04-16 04:41:01.070458 | orchestrator | changed: [testbed-manager]
2026-04-16 04:41:01.070540 | orchestrator |
2026-04-16 04:41:01.070551 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-04-16 04:41:01.113849 | orchestrator | ok: [testbed-manager]
2026-04-16 04:41:01.113943 | orchestrator |
2026-04-16 04:41:01.113964 | orchestrator | TASK [Get current user] ********************************************************
2026-04-16 04:41:01.923005 | orchestrator | ok: [testbed-manager]
2026-04-16 04:41:01.923125
| orchestrator | 2026-04-16 04:41:01.923153 | orchestrator | TASK [Create venv directory] *************************************************** 2026-04-16 04:41:02.697515 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:02.697589 | orchestrator | 2026-04-16 04:41:02.697602 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-04-16 04:41:08.712382 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:08.712527 | orchestrator | 2026-04-16 04:41:08.712557 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-04-16 04:41:14.223162 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:14.223200 | orchestrator | 2026-04-16 04:41:14.223208 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-04-16 04:41:16.925883 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:16.925935 | orchestrator | 2026-04-16 04:41:16.925948 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-04-16 04:41:18.725208 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:18.725253 | orchestrator | 2026-04-16 04:41:18.725261 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-04-16 04:41:19.813234 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-16 04:41:19.813319 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-16 04:41:19.813329 | orchestrator | 2026-04-16 04:41:19.813338 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-04-16 04:41:19.856614 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-16 04:41:19.856680 | orchestrator | display.prompt_until(msg) instead. 
This feature will be removed in version 2026-04-16 04:41:19.856689 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-16 04:41:19.856697 | orchestrator | deprecation_warnings=False in ansible.cfg. 2026-04-16 04:41:26.475754 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-04-16 04:41:26.475796 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-04-16 04:41:26.475802 | orchestrator | 2026-04-16 04:41:26.475808 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-04-16 04:41:27.035492 | orchestrator | changed: [testbed-manager] 2026-04-16 04:41:27.035546 | orchestrator | 2026-04-16 04:41:27.035557 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-04-16 04:43:46.832054 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-04-16 04:43:46.832177 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-04-16 04:43:46.832197 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-04-16 04:43:46.832209 | orchestrator | 2026-04-16 04:43:46.832221 | orchestrator | TASK [Install local collections] *********************************************** 2026-04-16 04:43:49.207641 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-04-16 04:43:49.207681 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-04-16 04:43:49.207688 | orchestrator | 2026-04-16 04:43:49.207695 | orchestrator | PLAY [Create operator user] **************************************************** 2026-04-16 04:43:49.207702 | orchestrator | 2026-04-16 04:43:49.207707 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:43:50.649950 | orchestrator | ok: [testbed-manager] 2026-04-16 04:43:50.650119 | orchestrator | 
2026-04-16 04:43:50.650150 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-04-16 04:43:50.695323 | orchestrator | ok: [testbed-manager] 2026-04-16 04:43:50.695489 | orchestrator | 2026-04-16 04:43:50.695519 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-04-16 04:43:50.764467 | orchestrator | ok: [testbed-manager] 2026-04-16 04:43:50.764570 | orchestrator | 2026-04-16 04:43:50.764584 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-04-16 04:43:51.587698 | orchestrator | changed: [testbed-manager] 2026-04-16 04:43:51.587755 | orchestrator | 2026-04-16 04:43:51.587769 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-04-16 04:43:52.331713 | orchestrator | changed: [testbed-manager] 2026-04-16 04:43:52.331817 | orchestrator | 2026-04-16 04:43:52.331834 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-04-16 04:43:53.723069 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-04-16 04:43:53.723108 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-04-16 04:43:53.723114 | orchestrator | 2026-04-16 04:43:53.723119 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-04-16 04:43:55.063453 | orchestrator | changed: [testbed-manager] 2026-04-16 04:43:55.063499 | orchestrator | 2026-04-16 04:43:55.063505 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-04-16 04:43:56.868473 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-04-16 04:43:56.868518 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-04-16 04:43:56.868532 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-04-16 04:43:56.868538 
| orchestrator | 2026-04-16 04:43:56.868545 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-04-16 04:43:56.928862 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:56.928901 | orchestrator | 2026-04-16 04:43:56.928907 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-04-16 04:43:57.007401 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:57.007450 | orchestrator | 2026-04-16 04:43:57.007461 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-04-16 04:43:57.554504 | orchestrator | changed: [testbed-manager] 2026-04-16 04:43:57.554544 | orchestrator | 2026-04-16 04:43:57.554551 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-04-16 04:43:57.616180 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:57.616244 | orchestrator | 2026-04-16 04:43:57.616252 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-04-16 04:43:58.466515 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-16 04:43:58.468744 | orchestrator | changed: [testbed-manager] 2026-04-16 04:43:58.470133 | orchestrator | 2026-04-16 04:43:58.472191 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-04-16 04:43:58.490681 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:58.490748 | orchestrator | 2026-04-16 04:43:58.490757 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-04-16 04:43:58.527507 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:58.527580 | orchestrator | 2026-04-16 04:43:58.527593 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-04-16 04:43:58.564503 | orchestrator | skipping: 
[testbed-manager] 2026-04-16 04:43:58.564572 | orchestrator | 2026-04-16 04:43:58.564580 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-04-16 04:43:58.634550 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:43:58.634645 | orchestrator | 2026-04-16 04:43:58.634664 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-04-16 04:43:59.413009 | orchestrator | ok: [testbed-manager] 2026-04-16 04:43:59.413058 | orchestrator | 2026-04-16 04:43:59.413066 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-04-16 04:43:59.413071 | orchestrator | 2026-04-16 04:43:59.413077 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:44:00.859606 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:00.859670 | orchestrator | 2026-04-16 04:44:00.859677 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-04-16 04:44:01.880553 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:01.880593 | orchestrator | 2026-04-16 04:44:01.880600 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 04:44:01.880606 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 2026-04-16 04:44:01.880610 | orchestrator | 2026-04-16 04:44:02.452329 | orchestrator | ok: Runtime: 0:09:28.660841 2026-04-16 04:44:02.469623 | 2026-04-16 04:44:02.469788 | TASK [Point out that logging in to the manager is now possible] 2026-04-16 04:44:02.503317 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 
2026-04-16 04:44:02.513009 | 2026-04-16 04:44:02.513150 | TASK [Point out that the following task takes some time and does not give any output] 2026-04-16 04:44:02.556816 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-04-16 04:44:02.567130 | 2026-04-16 04:44:02.567282 | TASK [Run manager part 1 + 2] 2026-04-16 04:44:03.374085 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-04-16 04:44:03.436071 | orchestrator | 2026-04-16 04:44:03.436146 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-04-16 04:44:03.436160 | orchestrator | 2026-04-16 04:44:03.436181 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:44:06.309534 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:06.309593 | orchestrator | 2026-04-16 04:44:06.309618 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-04-16 04:44:06.349043 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:44:06.349077 | orchestrator | 2026-04-16 04:44:06.349085 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-04-16 04:44:06.403896 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:06.403943 | orchestrator | 2026-04-16 04:44:06.403950 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-04-16 04:44:06.458305 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:06.458329 | orchestrator | 2026-04-16 04:44:06.458335 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-04-16 04:44:06.524024 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:06.524054 | orchestrator | 2026-04-16 04:44:06.524060 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-04-16 04:44:06.581583 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:06.581632 | orchestrator | 2026-04-16 04:44:06.581639 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-04-16 04:44:06.634377 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-04-16 04:44:06.634437 | orchestrator | 2026-04-16 04:44:06.634443 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-04-16 04:44:07.344690 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:07.344745 | orchestrator | 2026-04-16 04:44:07.344754 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-04-16 04:44:07.384151 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:44:07.384179 | orchestrator | 2026-04-16 04:44:07.384185 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-04-16 04:44:08.761129 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:08.761190 | orchestrator | 2026-04-16 04:44:08.761200 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-04-16 04:44:09.331332 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:09.331386 | orchestrator | 2026-04-16 04:44:09.331419 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-04-16 04:44:10.553583 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:10.553654 | orchestrator | 2026-04-16 04:44:10.553671 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-04-16 04:44:26.190880 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:26.190926 | orchestrator | 
2026-04-16 04:44:26.190933 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-04-16 04:44:26.854525 | orchestrator | ok: [testbed-manager] 2026-04-16 04:44:26.854576 | orchestrator | 2026-04-16 04:44:26.854585 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-04-16 04:44:26.897373 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:44:26.897443 | orchestrator | 2026-04-16 04:44:26.897450 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-04-16 04:44:27.809974 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:27.810039 | orchestrator | 2026-04-16 04:44:27.810046 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-04-16 04:44:28.721876 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:28.721941 | orchestrator | 2026-04-16 04:44:28.721947 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-04-16 04:44:29.315063 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:29.315166 | orchestrator | 2026-04-16 04:44:29.315182 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-04-16 04:44:29.358268 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-04-16 04:44:29.358419 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-04-16 04:44:29.358436 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-04-16 04:44:29.358448 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-04-16 04:44:31.343589 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:31.343687 | orchestrator | 2026-04-16 04:44:31.343704 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-04-16 04:44:40.001032 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-04-16 04:44:40.001184 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-04-16 04:44:40.001199 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-04-16 04:44:40.001208 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-04-16 04:44:40.001224 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-04-16 04:44:40.001232 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-04-16 04:44:40.001239 | orchestrator | 2026-04-16 04:44:40.001248 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-04-16 04:44:41.029239 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:41.029314 | orchestrator | 2026-04-16 04:44:41.029327 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-04-16 04:44:44.182330 | orchestrator | changed: [testbed-manager] 2026-04-16 04:44:44.182381 | orchestrator | 2026-04-16 04:44:44.182392 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-04-16 04:44:44.226212 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:44:44.226302 | orchestrator | 2026-04-16 04:44:44.226315 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-04-16 04:46:23.221003 | orchestrator | changed: [testbed-manager] 2026-04-16 04:46:23.221050 | orchestrator | 2026-04-16 04:46:23.221057 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-16 04:46:24.333059 | orchestrator | ok: [testbed-manager] 2026-04-16 04:46:24.333136 | 
orchestrator | 2026-04-16 04:46:24.333151 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 04:46:24.333161 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 2026-04-16 04:46:24.333169 | orchestrator | 2026-04-16 04:46:24.730582 | orchestrator | ok: Runtime: 0:02:21.570410 2026-04-16 04:46:24.750059 | 2026-04-16 04:46:24.750225 | TASK [Reboot manager] 2026-04-16 04:46:26.291673 | orchestrator | ok: Runtime: 0:00:00.966651 2026-04-16 04:46:26.307808 | 2026-04-16 04:46:26.307963 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-04-16 04:46:40.248904 | orchestrator | ok 2026-04-16 04:46:40.258050 | 2026-04-16 04:46:40.258187 | TASK [Wait a little longer for the manager so that everything is ready] 2026-04-16 04:47:40.304470 | orchestrator | ok 2026-04-16 04:47:40.314881 | 2026-04-16 04:47:40.315022 | TASK [Deploy manager + bootstrap nodes] 2026-04-16 04:47:42.666757 | orchestrator | 2026-04-16 04:47:42.667034 | orchestrator | # DEPLOY MANAGER 2026-04-16 04:47:42.667064 | orchestrator | 2026-04-16 04:47:42.667080 | orchestrator | + set -e 2026-04-16 04:47:42.667093 | orchestrator | + echo 2026-04-16 04:47:42.667107 | orchestrator | + echo '# DEPLOY MANAGER' 2026-04-16 04:47:42.667125 | orchestrator | + echo 2026-04-16 04:47:42.667176 | orchestrator | + cat /opt/manager-vars.sh 2026-04-16 04:47:42.670241 | orchestrator | export NUMBER_OF_NODES=6 2026-04-16 04:47:42.670329 | orchestrator | 2026-04-16 04:47:42.670346 | orchestrator | export CEPH_VERSION=reef 2026-04-16 04:47:42.670361 | orchestrator | export CONFIGURATION_VERSION=main 2026-04-16 04:47:42.670374 | orchestrator | export MANAGER_VERSION=9.5.0 2026-04-16 04:47:42.670402 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-04-16 04:47:42.670414 | orchestrator | 2026-04-16 04:47:42.670448 | orchestrator | export ARA=false 2026-04-16 04:47:42.670460 | orchestrator 
| export DEPLOY_MODE=manager 2026-04-16 04:47:42.670478 | orchestrator | export TEMPEST=false 2026-04-16 04:47:42.670512 | orchestrator | export IS_ZUUL=true 2026-04-16 04:47:42.670523 | orchestrator | 2026-04-16 04:47:42.670542 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 04:47:42.670554 | orchestrator | export EXTERNAL_API=false 2026-04-16 04:47:42.670565 | orchestrator | 2026-04-16 04:47:42.670576 | orchestrator | export IMAGE_USER=ubuntu 2026-04-16 04:47:42.670590 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-04-16 04:47:42.670601 | orchestrator | 2026-04-16 04:47:42.670612 | orchestrator | export CEPH_STACK=ceph-ansible 2026-04-16 04:47:42.670635 | orchestrator | 2026-04-16 04:47:42.670647 | orchestrator | + echo 2026-04-16 04:47:42.670660 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 04:47:42.671212 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 04:47:42.671239 | orchestrator | ++ INTERACTIVE=false 2026-04-16 04:47:42.671252 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 04:47:42.671263 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 04:47:42.671385 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 04:47:42.671401 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 04:47:42.671412 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 04:47:42.671423 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 04:47:42.671434 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 04:47:42.671560 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 04:47:42.671576 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 04:47:42.671588 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 04:47:42.671598 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 04:47:42.671609 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 04:47:42.671633 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 04:47:42.671644 | orchestrator | ++ export ARA=false 
2026-04-16 04:47:42.671655 | orchestrator | ++ ARA=false 2026-04-16 04:47:42.671667 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 04:47:42.671677 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 04:47:42.671688 | orchestrator | ++ export TEMPEST=false 2026-04-16 04:47:42.671699 | orchestrator | ++ TEMPEST=false 2026-04-16 04:47:42.671709 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 04:47:42.671720 | orchestrator | ++ IS_ZUUL=true 2026-04-16 04:47:42.671731 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 04:47:42.671742 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 04:47:42.671753 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 04:47:42.671764 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 04:47:42.671779 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 04:47:42.671790 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 04:47:42.671801 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 04:47:42.671812 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 04:47:42.671823 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 04:47:42.671834 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 04:47:42.671845 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-04-16 04:47:42.722163 | orchestrator | + docker version 2026-04-16 04:47:42.825998 | orchestrator | Client: Docker Engine - Community 2026-04-16 04:47:42.826167 | orchestrator | Version: 27.5.1 2026-04-16 04:47:42.826187 | orchestrator | API version: 1.47 2026-04-16 04:47:42.826199 | orchestrator | Go version: go1.22.11 2026-04-16 04:47:42.826210 | orchestrator | Git commit: 9f9e405 2026-04-16 04:47:42.826222 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-16 04:47:42.826234 | orchestrator | OS/Arch: linux/amd64 2026-04-16 04:47:42.826245 | orchestrator | Context: default 2026-04-16 04:47:42.826256 | orchestrator | 2026-04-16 04:47:42.826268 | 
orchestrator | Server: Docker Engine - Community 2026-04-16 04:47:42.826279 | orchestrator | Engine: 2026-04-16 04:47:42.826290 | orchestrator | Version: 27.5.1 2026-04-16 04:47:42.826301 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-04-16 04:47:42.826346 | orchestrator | Go version: go1.22.11 2026-04-16 04:47:42.826359 | orchestrator | Git commit: 4c9b3b0 2026-04-16 04:47:42.826370 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-04-16 04:47:42.826381 | orchestrator | OS/Arch: linux/amd64 2026-04-16 04:47:42.826392 | orchestrator | Experimental: false 2026-04-16 04:47:42.826403 | orchestrator | containerd: 2026-04-16 04:47:42.826414 | orchestrator | Version: v2.2.3 2026-04-16 04:47:42.826425 | orchestrator | GitCommit: 77c84241c7cbdd9b4eca2591793e3d4f4317c590 2026-04-16 04:47:42.826436 | orchestrator | runc: 2026-04-16 04:47:42.826448 | orchestrator | Version: 1.3.5 2026-04-16 04:47:42.826458 | orchestrator | GitCommit: v1.3.5-0-g488fc13e 2026-04-16 04:47:42.826469 | orchestrator | docker-init: 2026-04-16 04:47:42.826480 | orchestrator | Version: 0.19.0 2026-04-16 04:47:42.826535 | orchestrator | GitCommit: de40ad0 2026-04-16 04:47:42.828442 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-04-16 04:47:42.836946 | orchestrator | + set -e 2026-04-16 04:47:42.836990 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 04:47:42.837002 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 04:47:42.837013 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 04:47:42.837024 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 04:47:42.837034 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 04:47:42.837045 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 04:47:42.837057 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 04:47:42.837068 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 04:47:42.837079 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 04:47:42.837090 | orchestrator 
| ++ export OPENSTACK_VERSION=2024.2 2026-04-16 04:47:42.837101 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 04:47:42.837112 | orchestrator | ++ export ARA=false 2026-04-16 04:47:42.837123 | orchestrator | ++ ARA=false 2026-04-16 04:47:42.837134 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 04:47:42.837145 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 04:47:42.837156 | orchestrator | ++ export TEMPEST=false 2026-04-16 04:47:42.837166 | orchestrator | ++ TEMPEST=false 2026-04-16 04:47:42.837177 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 04:47:42.837188 | orchestrator | ++ IS_ZUUL=true 2026-04-16 04:47:42.837199 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 04:47:42.837209 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 04:47:42.837220 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 04:47:42.837231 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 04:47:42.837242 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 04:47:42.837252 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 04:47:42.837264 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 04:47:42.837275 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 04:47:42.837292 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 04:47:42.837303 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 04:47:42.837314 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 04:47:42.837324 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 04:47:42.837335 | orchestrator | ++ INTERACTIVE=false 2026-04-16 04:47:42.837346 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 04:47:42.837362 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 04:47:42.837372 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-16 04:47:42.837384 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-04-16 04:47:42.843465 | orchestrator | + set -e 2026-04-16 
04:47:42.843532 | orchestrator | + VERSION=9.5.0 2026-04-16 04:47:42.843546 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-16 04:47:42.849931 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-04-16 04:47:42.849962 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-16 04:47:42.854011 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-16 04:47:42.858064 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-16 04:47:42.865449 | orchestrator | /opt/configuration ~ 2026-04-16 04:47:42.865541 | orchestrator | + set -e 2026-04-16 04:47:42.865556 | orchestrator | + pushd /opt/configuration 2026-04-16 04:47:42.865568 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-16 04:47:42.866346 | orchestrator | + source /opt/venv/bin/activate 2026-04-16 04:47:42.868367 | orchestrator | ++ deactivate nondestructive 2026-04-16 04:47:42.868443 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:42.868461 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:42.868539 | orchestrator | ++ hash -r 2026-04-16 04:47:42.868553 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:42.868564 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-16 04:47:42.868575 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-16 04:47:42.868586 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-16 04:47:42.868597 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-16 04:47:42.868608 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-16 04:47:42.868619 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-16 04:47:42.868630 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-16 04:47:42.868642 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 04:47:42.868654 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 04:47:42.868665 | orchestrator | ++ export PATH 2026-04-16 04:47:42.868676 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:42.868687 | orchestrator | ++ '[' -z '' ']' 2026-04-16 04:47:42.868698 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-16 04:47:42.868710 | orchestrator | ++ PS1='(venv) ' 2026-04-16 04:47:42.868721 | orchestrator | ++ export PS1 2026-04-16 04:47:42.868732 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-16 04:47:42.868743 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-16 04:47:42.868753 | orchestrator | ++ hash -r 2026-04-16 04:47:42.868765 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-16 04:47:43.788169 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-16 04:47:43.789347 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-16 04:47:43.790886 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-16 04:47:43.792355 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-16 04:47:43.793656 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.1) 2026-04-16 04:47:43.803574 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-16 04:47:43.805004 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-16 04:47:43.806244 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-16 04:47:43.807569 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-16 04:47:43.837664 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-16 04:47:43.837920 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-16 04:47:43.839871 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-16 04:47:43.841337 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-16 04:47:43.845304 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-16 04:47:44.037081 | orchestrator | ++ which gilt 2026-04-16 04:47:44.040933 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-16 04:47:44.040995 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-16 04:47:44.265654 | orchestrator | osism.cfg-generics: 2026-04-16 04:47:44.397667 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-04-16 04:47:44.397779 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-04-16 04:47:44.397867 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-04-16 04:47:44.397883 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-04-16 04:47:45.171968 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-04-16 04:47:45.183306 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-04-16 04:47:45.524805 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-04-16 04:47:45.571693 | orchestrator | ~ 2026-04-16 04:47:45.571816 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-16 04:47:45.571830 | orchestrator | + deactivate 2026-04-16 04:47:45.571841 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-04-16 04:47:45.571852 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 04:47:45.571861 | orchestrator | + export PATH 2026-04-16 04:47:45.571870 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-04-16 04:47:45.571880 | orchestrator | + '[' -n '' ']' 2026-04-16 04:47:45.571891 | orchestrator | + hash -r 2026-04-16 04:47:45.571900 | orchestrator | + '[' -n '' ']' 2026-04-16 04:47:45.571909 | orchestrator | + unset VIRTUAL_ENV 2026-04-16 04:47:45.571918 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-04-16 04:47:45.571927 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-04-16 04:47:45.571936 | orchestrator | + unset -f deactivate 2026-04-16 04:47:45.571945 | orchestrator | + popd 2026-04-16 04:47:45.572507 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-16 04:47:45.572524 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-04-16 04:47:45.573469 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-16 04:47:45.619561 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 04:47:45.619640 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-04-16 04:47:45.619655 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-16 04:47:45.620013 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-16 04:47:45.667755 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-16 04:47:45.668679 | orchestrator | ++ semver 2024.2 2025.1 2026-04-16 04:47:45.721805 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-16 04:47:45.721903 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-04-16 04:47:45.805538 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-16 04:47:45.805671 | orchestrator | + source /opt/venv/bin/activate 2026-04-16 04:47:45.805699 | orchestrator | ++ deactivate nondestructive 2026-04-16 04:47:45.805721 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:45.805741 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:45.805761 | orchestrator | ++ hash -r 2026-04-16 04:47:45.805781 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:45.805802 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-16 04:47:45.805824 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-04-16 04:47:45.805845 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-04-16 04:47:45.805886 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-16 04:47:45.805908 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-16 04:47:45.805928 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-16 04:47:45.805947 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-16 04:47:45.805968 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 04:47:45.806082 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 04:47:45.806111 | orchestrator | ++ export PATH 2026-04-16 04:47:45.806131 | orchestrator | ++ '[' -n '' ']' 2026-04-16 04:47:45.806152 | orchestrator | ++ '[' -z '' ']' 2026-04-16 04:47:45.806181 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-16 04:47:45.806202 | orchestrator | ++ PS1='(venv) ' 2026-04-16 04:47:45.806222 | orchestrator | ++ export PS1 2026-04-16 04:47:45.806241 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-16 04:47:45.806261 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-16 04:47:45.806279 | orchestrator | ++ hash -r 2026-04-16 04:47:45.806306 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-04-16 04:47:46.838965 | orchestrator | 2026-04-16 04:47:46.839084 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-04-16 04:47:46.839101 | orchestrator | 2026-04-16 04:47:46.839113 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-04-16 04:47:47.375248 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:47.375370 | orchestrator | 2026-04-16 04:47:47.375393 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-04-16 04:47:48.324679 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:48.324811 | orchestrator | 2026-04-16 04:47:48.324829 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-04-16 04:47:48.324842 | orchestrator | 2026-04-16 04:47:48.324857 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:47:50.503860 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:50.504024 | orchestrator | 2026-04-16 04:47:50.504070 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-04-16 04:47:50.559536 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:50.559662 | orchestrator | 2026-04-16 04:47:50.559680 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-04-16 04:47:50.981940 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:50.982105 | orchestrator | 2026-04-16 04:47:50.982127 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-04-16 04:47:51.013384 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:47:51.013457 | orchestrator | 2026-04-16 04:47:51.013470 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-04-16 04:47:51.327052 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:51.327153 | orchestrator | 2026-04-16 04:47:51.327169 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-04-16 04:47:51.654666 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:51.654759 | orchestrator | 2026-04-16 04:47:51.654773 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-04-16 04:47:51.759917 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:47:51.760011 | orchestrator | 2026-04-16 04:47:51.760027 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-04-16 04:47:51.760039 | orchestrator | 2026-04-16 04:47:51.760052 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:47:53.472647 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:53.472748 | orchestrator | 2026-04-16 04:47:53.472764 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-04-16 04:47:53.564404 | orchestrator | included: osism.services.traefik for testbed-manager 2026-04-16 04:47:53.564542 | orchestrator | 2026-04-16 04:47:53.564571 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-04-16 04:47:53.615788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-04-16 04:47:53.615885 | orchestrator | 2026-04-16 04:47:53.615899 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-04-16 04:47:54.734798 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-04-16 04:47:54.734903 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-04-16 04:47:54.734919 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-04-16 04:47:54.734931 | orchestrator | 2026-04-16 04:47:54.734945 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-04-16 04:47:56.463295 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-04-16 04:47:56.463409 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-04-16 04:47:56.463425 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-04-16 04:47:56.463437 | orchestrator | 2026-04-16 04:47:56.463450 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-04-16 04:47:57.118666 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-16 04:47:57.118759 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:57.118774 | orchestrator | 2026-04-16 04:47:57.118785 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-04-16 04:47:57.730536 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-16 04:47:57.730640 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:57.730657 | orchestrator | 2026-04-16 04:47:57.730669 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-04-16 04:47:57.786116 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:47:57.786210 | orchestrator | 2026-04-16 04:47:57.786225 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-04-16 04:47:58.138127 | orchestrator | ok: [testbed-manager] 2026-04-16 04:47:58.138227 | orchestrator | 2026-04-16 04:47:58.138243 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-04-16 04:47:58.201166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-04-16 04:47:58.201253 | orchestrator | 2026-04-16 04:47:58.201266 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-04-16 04:47:59.226568 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:59.226703 | orchestrator | 2026-04-16 04:47:59.226731 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-04-16 04:47:59.949027 | orchestrator | changed: [testbed-manager] 2026-04-16 04:47:59.949128 | orchestrator | 2026-04-16 04:47:59.949145 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-04-16 04:48:13.994694 | 
orchestrator | changed: [testbed-manager] 2026-04-16 04:48:13.994820 | orchestrator | 2026-04-16 04:48:13.994849 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-04-16 04:48:14.049046 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:48:14.049146 | orchestrator | 2026-04-16 04:48:14.049185 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-04-16 04:48:14.049199 | orchestrator | 2026-04-16 04:48:14.049211 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-04-16 04:48:15.862777 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:15.862906 | orchestrator | 2026-04-16 04:48:15.862923 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-04-16 04:48:15.971136 | orchestrator | included: osism.services.manager for testbed-manager 2026-04-16 04:48:15.971231 | orchestrator | 2026-04-16 04:48:15.971246 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-04-16 04:48:16.023874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 04:48:16.023966 | orchestrator | 2026-04-16 04:48:16.023982 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-04-16 04:48:18.258376 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:18.258483 | orchestrator | 2026-04-16 04:48:18.258549 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-04-16 04:48:18.301735 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:18.301830 | orchestrator | 2026-04-16 04:48:18.301847 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-04-16 04:48:18.422728 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-04-16 04:48:18.422829 | orchestrator | 2026-04-16 04:48:18.422845 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-04-16 04:48:21.208000 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-04-16 04:48:21.208118 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-04-16 04:48:21.208134 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-04-16 04:48:21.208146 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-04-16 04:48:21.208157 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-04-16 04:48:21.208224 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-04-16 04:48:21.208237 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-04-16 04:48:21.208248 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-04-16 04:48:21.208258 | orchestrator | 2026-04-16 04:48:21.208270 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-04-16 04:48:21.797852 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:21.797960 | orchestrator | 2026-04-16 04:48:21.797980 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-04-16 04:48:22.391303 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:22.391384 | orchestrator | 2026-04-16 04:48:22.391393 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-04-16 04:48:22.465118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-04-16 04:48:22.465208 | orchestrator | 2026-04-16 04:48:22.465252 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-04-16 04:48:23.608372 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-04-16 04:48:23.608467 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-04-16 04:48:23.608480 | orchestrator | 2026-04-16 04:48:23.608490 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-04-16 04:48:24.198558 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:24.198654 | orchestrator | 2026-04-16 04:48:24.198666 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-04-16 04:48:24.258251 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:48:24.258356 | orchestrator | 2026-04-16 04:48:24.258373 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-04-16 04:48:24.332412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-04-16 04:48:24.332577 | orchestrator | 2026-04-16 04:48:24.332597 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-04-16 04:48:24.929138 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:24.929239 | orchestrator | 2026-04-16 04:48:24.929256 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-04-16 04:48:24.995368 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-04-16 04:48:24.995458 | orchestrator | 2026-04-16 04:48:24.995472 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-04-16 04:48:26.325315 | orchestrator | changed: [testbed-manager] => (item=None) 2026-04-16 04:48:26.325435 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-04-16 04:48:26.325450 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:26.325462 | orchestrator | 2026-04-16 04:48:26.325544 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-04-16 04:48:26.922971 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:26.923075 | orchestrator | 2026-04-16 04:48:26.923090 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-04-16 04:48:26.975110 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:48:26.975202 | orchestrator | 2026-04-16 04:48:26.975216 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-04-16 04:48:27.079071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-04-16 04:48:27.079174 | orchestrator | 2026-04-16 04:48:27.079191 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-04-16 04:48:27.572678 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:27.572797 | orchestrator | 2026-04-16 04:48:27.572822 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-04-16 04:48:27.972472 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:27.972602 | orchestrator | 2026-04-16 04:48:27.972615 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-04-16 04:48:29.126667 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-04-16 04:48:29.126773 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-04-16 04:48:29.126789 | orchestrator | 2026-04-16 04:48:29.126803 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-04-16 04:48:29.747498 | orchestrator | changed: [testbed-manager] 2026-04-16 
04:48:29.747648 | orchestrator | 2026-04-16 04:48:29.747664 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-04-16 04:48:30.123592 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:30.123665 | orchestrator | 2026-04-16 04:48:30.123673 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-04-16 04:48:30.450290 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:30.450387 | orchestrator | 2026-04-16 04:48:30.450401 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-04-16 04:48:30.501866 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:48:30.501946 | orchestrator | 2026-04-16 04:48:30.501956 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-04-16 04:48:30.569822 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-04-16 04:48:30.569918 | orchestrator | 2026-04-16 04:48:30.569934 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-04-16 04:48:30.602254 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:30.602370 | orchestrator | 2026-04-16 04:48:30.602386 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-04-16 04:48:32.505860 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-04-16 04:48:32.505972 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-04-16 04:48:32.505989 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-04-16 04:48:32.506002 | orchestrator | 2026-04-16 04:48:32.506077 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-16 04:48:33.175794 | orchestrator | changed: [testbed-manager] 2026-04-16 
04:48:33.175898 | orchestrator | 2026-04-16 04:48:33.175915 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-16 04:48:33.831085 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:33.831194 | orchestrator | 2026-04-16 04:48:33.831212 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-16 04:48:34.495375 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:34.495456 | orchestrator | 2026-04-16 04:48:34.495467 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-16 04:48:34.571452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-16 04:48:34.571621 | orchestrator | 2026-04-16 04:48:34.571654 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-16 04:48:34.606407 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:34.606462 | orchestrator | 2026-04-16 04:48:34.606476 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-16 04:48:35.262779 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-04-16 04:48:35.262880 | orchestrator | 2026-04-16 04:48:35.262895 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-16 04:48:35.338677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-16 04:48:35.338777 | orchestrator | 2026-04-16 04:48:35.338793 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-16 04:48:36.004268 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:36.004370 | orchestrator | 2026-04-16 04:48:36.004388 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-04-16 04:48:36.599119 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:36.599222 | orchestrator | 2026-04-16 04:48:36.599239 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-16 04:48:36.639212 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:48:36.639299 | orchestrator | 2026-04-16 04:48:36.639315 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-16 04:48:36.699990 | orchestrator | ok: [testbed-manager] 2026-04-16 04:48:36.700077 | orchestrator | 2026-04-16 04:48:36.700087 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-16 04:48:37.477676 | orchestrator | changed: [testbed-manager] 2026-04-16 04:48:37.477766 | orchestrator | 2026-04-16 04:48:37.477781 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-16 04:49:41.029309 | orchestrator | changed: [testbed-manager] 2026-04-16 04:49:41.029439 | orchestrator | 2026-04-16 04:49:41.029460 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-16 04:49:41.910162 | orchestrator | ok: [testbed-manager] 2026-04-16 04:49:41.910260 | orchestrator | 2026-04-16 04:49:41.910275 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-16 04:49:41.953990 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:49:41.954145 | orchestrator | 2026-04-16 04:49:41.954173 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-16 04:49:48.217564 | orchestrator | changed: [testbed-manager] 2026-04-16 04:49:48.217778 | orchestrator | 2026-04-16 04:49:48.217801 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-04-16 04:49:48.264915 | orchestrator | ok: [testbed-manager] 2026-04-16 04:49:48.264996 | orchestrator | 2026-04-16 04:49:48.265007 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-16 04:49:48.265016 | orchestrator | 2026-04-16 04:49:48.265023 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-16 04:49:48.412223 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:49:48.412318 | orchestrator | 2026-04-16 04:49:48.412335 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-16 04:50:48.470212 | orchestrator | Pausing for 60 seconds 2026-04-16 04:50:48.470338 | orchestrator | changed: [testbed-manager] 2026-04-16 04:50:48.470356 | orchestrator | 2026-04-16 04:50:48.470370 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-16 04:50:51.020168 | orchestrator | changed: [testbed-manager] 2026-04-16 04:50:51.020285 | orchestrator | 2026-04-16 04:50:51.020307 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-16 04:51:32.441780 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-16 04:51:32.441870 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-16 04:51:32.441880 | orchestrator | changed: [testbed-manager]
2026-04-16 04:51:32.441889 | orchestrator |
2026-04-16 04:51:32.441960 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-04-16 04:51:42.039355 | orchestrator | changed: [testbed-manager]
2026-04-16 04:51:42.039474 | orchestrator |
2026-04-16 04:51:42.039493 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-04-16 04:51:42.134480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-04-16 04:51:42.134590 | orchestrator |
2026-04-16 04:51:42.134605 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-04-16 04:51:42.134618 | orchestrator |
2026-04-16 04:51:42.134629 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-04-16 04:51:42.185807 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:51:42.185879 | orchestrator |
2026-04-16 04:51:42.185900 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-04-16 04:51:42.261167 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-04-16 04:51:42.261251 | orchestrator |
2026-04-16 04:51:42.261264 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-04-16 04:51:43.032474 | orchestrator | changed: [testbed-manager]
2026-04-16 04:51:43.032573 | orchestrator |
2026-04-16 04:51:43.032590 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-04-16 04:51:46.080829 | orchestrator | ok: [testbed-manager]
2026-04-16 04:51:46.080933 | orchestrator |
2026-04-16 04:51:46.081007 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-16 04:51:46.140549 | orchestrator | ok: [testbed-manager] => {
2026-04-16 04:51:46.140648 | orchestrator | "version_check_result.stdout_lines": [
2026-04-16 04:51:46.140671 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-16 04:51:46.140690 | orchestrator | "Checking running containers against expected versions...",
2026-04-16 04:51:46.140710 | orchestrator | "",
2026-04-16 04:51:46.140730 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-16 04:51:46.140750 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-16 04:51:46.140763 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.140774 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-04-16 04:51:46.140785 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.140797 | orchestrator | "",
2026-04-16 04:51:46.140808 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-16 04:51:46.140820 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-16 04:51:46.140858 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.140870 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-04-16 04:51:46.140881 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.140892 | orchestrator | "",
2026-04-16 04:51:46.140903 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-16 04:51:46.140914 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-16 04:51:46.140925 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.140935 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-04-16 04:51:46.140981 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.140992 | orchestrator | "",
2026-04-16 04:51:46.141003 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-16 04:51:46.141015 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-16 04:51:46.141026 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141038 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-04-16 04:51:46.141049 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141060 | orchestrator | "",
2026-04-16 04:51:46.141071 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-16 04:51:46.141086 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-16 04:51:46.141099 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141112 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-04-16 04:51:46.141124 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141136 | orchestrator | "",
2026-04-16 04:51:46.141148 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-16 04:51:46.141160 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141172 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141184 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141196 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141208 | orchestrator | "",
2026-04-16 04:51:46.141220 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-16 04:51:46.141233 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-16 04:51:46.141245 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141258 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-16 04:51:46.141271 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141283 | orchestrator | "",
2026-04-16 04:51:46.141295 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-16 04:51:46.141307 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-16 04:51:46.141320 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141332 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-16 04:51:46.141344 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141357 | orchestrator | "",
2026-04-16 04:51:46.141369 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-16 04:51:46.141381 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-16 04:51:46.141394 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141407 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-04-16 04:51:46.141420 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141432 | orchestrator | "",
2026-04-16 04:51:46.141444 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-16 04:51:46.141455 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-16 04:51:46.141466 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141477 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-16 04:51:46.141488 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141499 | orchestrator | "",
2026-04-16 04:51:46.141509 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-16 04:51:46.141520 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141540 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141551 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141562 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141573 | orchestrator | "",
2026-04-16 04:51:46.141584 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-16 04:51:46.141595 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141606 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141617 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141628 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141640 | orchestrator | "",
2026-04-16 04:51:46.141651 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-16 04:51:46.141662 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141673 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141684 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141695 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141706 | orchestrator | "",
2026-04-16 04:51:46.141717 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-16 04:51:46.141728 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141738 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141750 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141779 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141790 | orchestrator | "",
2026-04-16 04:51:46.141801 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-16 04:51:46.141812 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141823 | orchestrator | " Enabled: true",
2026-04-16 04:51:46.141844 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-04-16 04:51:46.141856 | orchestrator | " Status: ✅ MATCH",
2026-04-16 04:51:46.141867 | orchestrator | "",
2026-04-16 04:51:46.141878 | orchestrator | "=== Summary ===",
2026-04-16 04:51:46.141889 | orchestrator | "Errors (version mismatches): 0",
2026-04-16 04:51:46.141900 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-16 04:51:46.141911 | orchestrator | "",
2026-04-16 04:51:46.141922 | orchestrator | "✅ All running containers match expected versions!"
2026-04-16 04:51:46.141933 | orchestrator | ]
2026-04-16 04:51:46.141972 | orchestrator | }
2026-04-16 04:51:46.141984 | orchestrator |
2026-04-16 04:51:46.141995 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-16 04:51:46.193877 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:51:46.194077 | orchestrator |
2026-04-16 04:51:46.194097 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 04:51:46.194111 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-04-16 04:51:46.194123 | orchestrator |
2026-04-16 04:51:46.284005 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-16 04:51:46.284105 | orchestrator | + deactivate
2026-04-16 04:51:46.284121 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-16 04:51:46.284135 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-16 04:51:46.284146 | orchestrator | + export PATH
2026-04-16 04:51:46.284157 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-16 04:51:46.284169 | orchestrator | + '[' -n '' ']'
2026-04-16 04:51:46.284193 | orchestrator | + hash -r
2026-04-16 04:51:46.284213 | orchestrator | + '[' -n '' ']'
2026-04-16 04:51:46.284224 | orchestrator | + unset VIRTUAL_ENV
2026-04-16 04:51:46.284235 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-16 04:51:46.284247 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-16 04:51:46.284258 | orchestrator | + unset -f deactivate
2026-04-16 04:51:46.284271 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-04-16 04:51:46.290167 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-16 04:51:46.290205 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-04-16 04:51:46.290217 | orchestrator | + local max_attempts=60
2026-04-16 04:51:46.290254 | orchestrator | + local name=ceph-ansible
2026-04-16 04:51:46.290265 | orchestrator | + local attempt_num=1
2026-04-16 04:51:46.290934 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-04-16 04:51:46.329619 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-16 04:51:46.329721 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-16 04:51:46.329739 | orchestrator | + local max_attempts=60
2026-04-16 04:51:46.329752 | orchestrator | + local name=kolla-ansible
2026-04-16 04:51:46.329764 | orchestrator | + local attempt_num=1
2026-04-16 04:51:46.330488 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-16 04:51:46.362147 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-16 04:51:46.362243 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-16 04:51:46.362258 | orchestrator | + local max_attempts=60
2026-04-16 04:51:46.362271 | orchestrator | + local name=osism-ansible
2026-04-16 04:51:46.362319 | orchestrator | + local attempt_num=1
2026-04-16 04:51:46.362345 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-16 04:51:46.387341 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-16 04:51:46.387457 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-16 04:51:46.387484 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-04-16 04:51:47.039587 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-16 04:51:47.208881 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-16 04:51:47.209025 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209035 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209041 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2026-04-16 04:51:47.209048 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2026-04-16 04:51:47.209068 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209073 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209077 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy)
2026-04-16 04:51:47.209082 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209087 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2026-04-16 04:51:47.209091 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209096 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2026-04-16 04:51:47.209100 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209120 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2026-04-16 04:51:47.209125 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.209130 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2026-04-16 04:51:47.215058 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-16 04:51:47.259877 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 04:51:47.259984 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-04-16 04:51:47.263282 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-04-16 04:51:59.483687 | orchestrator | 2026-04-16 04:51:59 | INFO  | Task e110f302-3f98-4133-96d5-bffe90198cc3 (resolvconf) was prepared for execution.
2026-04-16 04:51:59.483769 | orchestrator | 2026-04-16 04:51:59 | INFO  | It takes a moment until task e110f302-3f98-4133-96d5-bffe90198cc3 (resolvconf) has been started and output is visible here.
2026-04-16 04:52:11.683152 | orchestrator |
2026-04-16 04:52:11.683272 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-04-16 04:52:11.683291 | orchestrator |
2026-04-16 04:52:11.683304 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-16 04:52:11.683315 | orchestrator | Thursday 16 April 2026 04:52:03 +0000 (0:00:00.101) 0:00:00.101 ********
2026-04-16 04:52:11.683326 | orchestrator | ok: [testbed-manager]
2026-04-16 04:52:11.683338 | orchestrator |
2026-04-16 04:52:11.683350 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-16 04:52:11.683363 | orchestrator | Thursday 16 April 2026 04:52:06 +0000 (0:00:03.229) 0:00:03.330 ********
2026-04-16 04:52:11.683373 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:52:11.683386 | orchestrator |
2026-04-16 04:52:11.683397 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-16 04:52:11.683408 | orchestrator | Thursday 16 April 2026 04:52:06 +0000 (0:00:00.051) 0:00:03.382 ********
2026-04-16 04:52:11.683419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-04-16 04:52:11.683431 | orchestrator |
2026-04-16 04:52:11.683442 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-16 04:52:11.683453 | orchestrator | Thursday 16 April 2026 04:52:06 +0000 (0:00:00.079) 0:00:03.462 ********
2026-04-16 04:52:11.683510 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-04-16 04:52:11.683522 | orchestrator |
2026-04-16 04:52:11.683534 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-16 04:52:11.683545 | orchestrator | Thursday 16 April 2026 04:52:06 +0000 (0:00:00.063) 0:00:03.525 ********
2026-04-16 04:52:11.683556 | orchestrator | ok: [testbed-manager]
2026-04-16 04:52:11.683567 | orchestrator |
2026-04-16 04:52:11.683578 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-16 04:52:11.683589 | orchestrator | Thursday 16 April 2026 04:52:07 +0000 (0:00:00.830) 0:00:04.356 ********
2026-04-16 04:52:11.683600 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:52:11.683611 | orchestrator |
2026-04-16 04:52:11.683622 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-16 04:52:11.683633 | orchestrator | Thursday 16 April 2026 04:52:07 +0000 (0:00:00.049) 0:00:04.405 ********
2026-04-16 04:52:11.683667 | orchestrator | ok: [testbed-manager]
2026-04-16 04:52:11.683680 | orchestrator |
2026-04-16 04:52:11.683693 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-16 04:52:11.683705 | orchestrator | Thursday 16 April 2026 04:52:07 +0000 (0:00:00.414) 0:00:04.820 ********
2026-04-16 04:52:11.683717 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:52:11.683729 | orchestrator |
2026-04-16 04:52:11.683743 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-16 04:52:11.683757 | orchestrator | Thursday 16 April 2026 04:52:07 +0000 (0:00:00.076) 0:00:04.897 ********
2026-04-16 04:52:11.683770 | orchestrator | changed: [testbed-manager]
2026-04-16 04:52:11.683782 | orchestrator |
2026-04-16 04:52:11.683795 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-16 04:52:11.683807 | orchestrator | Thursday 16 April 2026 04:52:08 +0000 (0:00:00.481) 0:00:05.379 ********
2026-04-16 04:52:11.683819 | orchestrator | changed: [testbed-manager]
2026-04-16 04:52:11.683832 | orchestrator |
2026-04-16 04:52:11.683845 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-16 04:52:11.683857 | orchestrator | Thursday 16 April 2026 04:52:09 +0000 (0:00:01.014) 0:00:06.393 ********
2026-04-16 04:52:11.683870 | orchestrator | ok: [testbed-manager]
2026-04-16 04:52:11.683883 | orchestrator |
2026-04-16 04:52:11.683895 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-16 04:52:11.683908 | orchestrator | Thursday 16 April 2026 04:52:10 +0000 (0:00:00.913) 0:00:07.306 ********
2026-04-16 04:52:11.683920 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-04-16 04:52:11.683932 | orchestrator |
2026-04-16 04:52:11.683944 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-16 04:52:11.683956 | orchestrator | Thursday 16 April 2026 04:52:10 +0000 (0:00:00.077) 0:00:07.384 ********
2026-04-16 04:52:11.683968 | orchestrator | changed: [testbed-manager]
2026-04-16 04:52:11.683981 | orchestrator |
2026-04-16 04:52:11.683994 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 04:52:11.684007 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 04:52:11.684018 | orchestrator |
2026-04-16 04:52:11.684051 | orchestrator |
2026-04-16 04:52:11.684062 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 04:52:11.684073 | orchestrator | Thursday 16 April 2026 04:52:11 +0000 (0:00:01.084) 0:00:08.468 ********
2026-04-16 04:52:11.684083 | orchestrator | ===============================================================================
2026-04-16 04:52:11.684094 | orchestrator | Gathering Facts --------------------------------------------------------- 3.23s
2026-04-16 04:52:11.684105 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s
2026-04-16 04:52:11.684115 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s
2026-04-16 04:52:11.684126 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.91s
2026-04-16 04:52:11.684137 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.83s
2026-04-16 04:52:11.684147 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s
2026-04-16 04:52:11.684177 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.42s
2026-04-16 04:52:11.684189 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2026-04-16 04:52:11.684199 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2026-04-16 04:52:11.684210 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-04-16 04:52:11.684221 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.06s
2026-04-16 04:52:11.684231 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s
2026-04-16 04:52:11.684250 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s
2026-04-16 04:52:11.925817 | orchestrator | + osism apply sshconfig
2026-04-16 04:52:23.871727 | orchestrator | 2026-04-16 04:52:23 | INFO  | Task 0dff0ee8-8138-4cc3-8f72-922ca903132d (sshconfig) was prepared for execution.
2026-04-16 04:52:23.871855 | orchestrator | 2026-04-16 04:52:23 | INFO  | It takes a moment until task 0dff0ee8-8138-4cc3-8f72-922ca903132d (sshconfig) has been started and output is visible here.
2026-04-16 04:52:34.042409 | orchestrator |
2026-04-16 04:52:34.042527 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-04-16 04:52:34.042543 | orchestrator |
2026-04-16 04:52:34.042578 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-04-16 04:52:34.042591 | orchestrator | Thursday 16 April 2026 04:52:27 +0000 (0:00:00.112) 0:00:00.112 ********
2026-04-16 04:52:34.042602 | orchestrator | ok: [testbed-manager]
2026-04-16 04:52:34.042614 | orchestrator |
2026-04-16 04:52:34.042625 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-04-16 04:52:34.042636 | orchestrator | Thursday 16 April 2026 04:52:27 +0000 (0:00:00.484) 0:00:00.596 ********
2026-04-16 04:52:34.042648 | orchestrator | changed: [testbed-manager]
2026-04-16 04:52:34.042660 | orchestrator |
2026-04-16 04:52:34.042671 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-04-16 04:52:34.042682 | orchestrator | Thursday 16 April 2026 04:52:28 +0000 (0:00:00.435) 0:00:01.032 ********
2026-04-16 04:52:34.042692 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-04-16 04:52:34.042704 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-04-16 04:52:34.042715 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-04-16 04:52:34.042726 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-04-16 04:52:34.042737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-04-16 04:52:34.042748 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-04-16 04:52:34.042759 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-04-16 04:52:34.042769 | orchestrator |
2026-04-16 04:52:34.042781 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-04-16 04:52:34.042792 | orchestrator | Thursday 16 April 2026 04:52:33 +0000 (0:00:04.961) 0:00:05.993 ********
2026-04-16 04:52:34.042802 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:52:34.042813 | orchestrator |
2026-04-16 04:52:34.042824 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-04-16 04:52:34.042835 | orchestrator | Thursday 16 April 2026 04:52:33 +0000 (0:00:00.065) 0:00:06.058 ********
2026-04-16 04:52:34.042846 | orchestrator | changed: [testbed-manager]
2026-04-16 04:52:34.042857 | orchestrator |
2026-04-16 04:52:34.042868 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 04:52:34.042880 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 04:52:34.042891 | orchestrator |
2026-04-16 04:52:34.042902 | orchestrator |
2026-04-16 04:52:34.042913 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 04:52:34.042924 | orchestrator | Thursday 16 April 2026 04:52:33 +0000 (0:00:00.464) 0:00:06.523 ********
2026-04-16 04:52:34.042938 | orchestrator | ===============================================================================
2026-04-16 04:52:34.042951 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.96s
2026-04-16 04:52:34.042963 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.48s
2026-04-16 04:52:34.042976 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.46s
2026-04-16 04:52:34.042988 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.44s
2026-04-16 04:52:34.043023 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2026-04-16 04:52:34.212473 | orchestrator | + osism apply known-hosts
2026-04-16 04:52:46.080602 | orchestrator | 2026-04-16 04:52:46 | INFO  | Task 70c5c14d-ab8c-4dc3-b213-c0ea554c5a89 (known-hosts) was prepared for execution.
2026-04-16 04:52:46.080721 | orchestrator | 2026-04-16 04:52:46 | INFO  | It takes a moment until task 70c5c14d-ab8c-4dc3-b213-c0ea554c5a89 (known-hosts) has been started and output is visible here.
2026-04-16 04:53:01.482330 | orchestrator |
2026-04-16 04:53:01.482441 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-04-16 04:53:01.482457 | orchestrator |
2026-04-16 04:53:01.482466 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-04-16 04:53:01.482476 | orchestrator | Thursday 16 April 2026 04:52:49 +0000 (0:00:00.121) 0:00:00.121 ********
2026-04-16 04:53:01.482485 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-04-16 04:53:01.482501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-04-16 04:53:01.482510 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-04-16 04:53:01.482518 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-04-16 04:53:01.482526 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-16 04:53:01.482533 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-16 04:53:01.482541 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-16 04:53:01.482549 | orchestrator |
2026-04-16 04:53:01.482557 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-04-16 04:53:01.482567 | orchestrator | Thursday 16 April 2026 04:52:55 +0000 (0:00:05.592) 0:00:05.713 ********
2026-04-16 04:53:01.482576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-04-16 04:53:01.482586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-04-16 04:53:01.482594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-04-16 04:53:01.482601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-04-16 04:53:01.482608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-04-16 04:53:01.482628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-04-16 04:53:01.482635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-04-16 04:53:01.482643 | orchestrator |
2026-04-16 04:53:01.482652 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.482660 | orchestrator | Thursday 16 April 2026 04:52:55 +0000 (0:00:00.145) 0:00:05.859 ********
2026-04-16 04:53:01.482674 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQo7OmkhOSl+047BBCX2DwxlE8M1iHYrJU+ejViRv5Mftz+xRHiKt9pMjzZYI0dOc4CMWSwZWWXqxrQBaiulaN9YVM51mt27rCKekKJf3vnTH+Qadz/I+lznSJunHnw0GH85DlLqXJL7xmeDFbfMHTZZxVNXbYsQwKAUZURCFNcT3zy6tN+EojosA0AECJObyynwdZi/BuPaegWE70Kr3cAVtS0ReLTeKZA1lJ/KU1DT0q0rnQ/1LhXSiXnd9YHjIH/49QANvmgTT9ko/BDMRv9a8X0XIiVylou14dMi6uUWZvOS6il40OU9bcovUmB+v37U8Jf4W2ignEn9nHFbiyhoCzeNdQPH60GbsHIf+BJqV0v378ie6vps99YPPjb4iu8kVzUTiRm+t23pFskw1fCosNk0cdYf9yKT5XBMMNFxArwg7FKBfaHq4fUxrIFs1qHLCgChx0Ugful4n7IDYpjGhbvVMnLFv+ilueUS4rsDSh2V/m2f4V3JdfeiiTxlU=)
2026-04-16 04:53:01.482706 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGFtoq7CbUvnjtZY2UaH4OYTSTbZJ0wmGII+KWKK2J7FuVTXDowGr1fvrzaQCZoXXyA9H8LaubNPbO1f5cQsTYA=)
2026-04-16 04:53:01.482716 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdl/S2FlHl7WJwwYU89pixIOT6+uVMhj67ZspRHfXZt)
2026-04-16 04:53:01.482726 | orchestrator |
2026-04-16 04:53:01.482733 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.482741 | orchestrator | Thursday 16 April 2026 04:52:56 +0000 (0:00:01.024) 0:00:06.883 ********
2026-04-16 04:53:01.482768 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9QarJN350Z9zZvCV4x/hGJb4+8OVYcZat8QXU1bAgopz5olRDoh2sBFS5uBeEV2SmB8If/UCVNXYYUuSmq8Q+LvjJRHevld5TDeUC0XA92W4T1WQ8b5D0VafBBAGiq33TCFDuLngDvlfR0PMJADFWq+6NJTW5YpafQPfEw3swU1skfGktfOHZwwxe/5XjK6z188DEfCKLZ9xFLOHk39XqbmZKJGVjFJ4mbTYNYZ0/GE65Bz0AdYCepOsGRWd0oFQM6NPSUMPYPK2E+ZiBjddwFHkJo/e3C4AZ2jpUjJNnQJ8GbSy7luZ+TQ/HMeWPlBkJ/D3SiDMikDrBJj5SBEnSOuzUK5LuHmPuwFKRJWAFXDqEne0caZGztDDECiN8AsTTX0q7X88f5zl5qe7FRKY2UqRvlxtMLtWvdc3qjmgqU3LX8xZ9xayNVDky/mv9DkXd7XNA/pN1gwOakgut7nofBx3VTE+H2ote2iqGbvq3t0l85l/TjnAFQZMs4eI618U=)
2026-04-16 04:53:01.482776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA47+2W2QVwjUewU6YiIgVwxXXSgHw/iGJVL/jECa8dPr6lTI4rCrgzO8NAHkU0jF9M48MN++OhfkerQBQBk7O0=)
2026-04-16 04:53:01.482784 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCOPgVYvf680xPjfDWeho1Qt8+ANHx0natMCmtoC9Bg)
2026-04-16 04:53:01.482792 | orchestrator |
2026-04-16 04:53:01.482799 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.482806 | orchestrator | Thursday 16 April 2026 04:52:57 +0000 (0:00:00.906) 0:00:07.790 ********
2026-04-16 04:53:01.482814 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxh15X8xNl1tB0Ym6pXAIFlY0I6iZM0HkBd577QUVsFZtryNBwZY73GU9pVi2htd8U2olZj8Ng/k6uj+bz23Cb0gBmuliilV3yho6DddT8BV9W3ps+Wa/12irTZ/wL2OGzKelZ4DP+Oh+v5CiZM2rrhTOiQN5dxpd13r5z3oe8UZJwCVyouUlL9Ir5ctNxK+nwlkdfoPP0f87PPrG5l5a613G4OBBsoG3KXWf1OYiQD21Rlo9lxDcy5jtBJpIkkbmC7jtzvD7r2sdw9N7vMC3wE3tumhtxEhgb1P/jzQRWCS06XTXqHDKEeMa1hF7bgfkMNVpzNUvPHrEi1NmFcCqS7VCx9K5AtwMzc5G7FhnMk2vE7Ufmjca/pSnAdP6hZSwUYncNSb56xsE6S0mLAhw7qLKRA1Hh89O4V59fMwDJNC4e0/w8tWQyr8s9vIuev+9yzvWK8zFKOUZR8TZ3l+48gxTy+y+B+5S+xrPWylaKVaPZfSLbIAxb18iy27ZsXnk=)
2026-04-16 04:53:01.482821 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVjiKbdcWjEJI2qTwPRwaDXYqwN6MV8FmWLvKcxZYyAjwjIj8BxdTTxrOQxkyxFh8UUsQkTYkCG0G3o5jRgqBE=)
2026-04-16 04:53:01.482829 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIME3CBDAlwGcQcrdcw65KaI1PoqvCNGLYyIXscl98utS)
2026-04-16 04:53:01.482837 | orchestrator |
2026-04-16 04:53:01.482845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.482852 | orchestrator | Thursday 16 April 2026 04:52:58 +0000 (0:00:00.921) 0:00:08.711 ********
2026-04-16 04:53:01.482860 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjjAkwEzOcJUuCBfYqtxRMjAU7x301nLsnpqUtIJmMeRdn+vRzoGMiLT3+bpi0j22772tAXUpXe7nudaXLNRC1igghdROGt4psstFGfCtEb6wDYf25/k5LyOxc3ASnONTkN+MxzUCB6li69BNRDflW7wJwQV1+UEidoRZDQH0aXImxy5L25NY7ZM7p7PSnmnMaUWca6U7GEQRdbvnaJoxmR1WsGyZEDLhlC1Uw2AhRTHNnakcHRyJDxRRv2Mwz8JNkcM+bNxMmCmKfu8lS9pnBAEuzHUhXOo6uqaN/Tn8Y0eNifVutgcunZ8LUg8GnlqFhgKSBvX4HpFvS74/r3a8fFpfnU8F2FkkMkh+G23QX9H0Y7w3nq5T+LWxDkgTxQ6nFO9WBFQ6Cz7ZOlPxGZ5lrvs73VfR6dbkqVr2S4GPSt+TSi0Geg+4DBxhQDuSWK8JwGOYR2YEHMgBoEoSUSB5/p79j/8kzxE1lm7Gui5ddlJjNF25lVusDsCpkKevpb4U=)
2026-04-16 04:53:01.482875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNAgsot35QKs3LFb4J5jNApBgnwDWABx9YbigsnP7whBVHm3yyAJ8rw2koa3sJXxUnfKHUdaO8pgxpNHYJKhPWM=)
2026-04-16 04:53:01.482882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAiWFluA6ZLf/0/6eEoPyUKNhEtubkMgPppsqgZFfcA1)
2026-04-16 04:53:01.482890 | orchestrator |
2026-04-16 04:53:01.482897 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.482904 | orchestrator | Thursday 16 April 2026 04:52:59 +0000 (0:00:00.995) 0:00:09.707 ********
2026-04-16 04:53:01.482969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMOmwukGCOhEmz5tJ4daTLw1DweUQ7ZCnH2foEYUwAIh)
2026-04-16 04:53:01.482977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCLtQCqoRuXPQ9tlJ5BflZW4wZ9P45b+TXlM1ciKvJk2Tyi+JmLjIOT5z8BM1WhBbKQyOq6fmVh6+rQT3C94yiVxeDDggXJI724sXvFW5uQSyUWbot5EVpBvBHTuOyvb+9fLOPCtMBEGfjqxmjJ1moRCpx2EKTZ9ttvtJ1fnrYlIGDkKJiVRu7NMnV2GwiJVOjvdGdMz9uZuMpYWSGvlqU3kOStPc98ptGEnA4mpxUNOSMHEZZunJR6WZIhUz80b+lbbzDZYhp37fQeTIOREq8TBcYhRZhtqligh6KihzEH1N+2zH5nTGSi2T2M7YLICDM+6LtIU/C4cCa0lXEHTd0A2jF18yi5r9tFKSS02tBr+a7j5weFKsiENDaSh1ocx4ZN/FJL35s6SINJYvmE/qqg9Pw1fo+43ETgXYi0vowBe/5EqHSYHFv2Nio9fT/7blBQL9DPoOAP2Qgtk3nLTo3bpTnsVTVY0HzHnnS6YBenGMtVnKF4eQW1MoAhcw30BDM=)
2026-04-16 04:53:01.482984 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXx0EX6f1RVdyqfnvjoh4PPQz/t1QxN8WNxfm2QtOI+nlp1HkGMxOhkuVjqLzSja39+1bgUXC0CseA6Dgp98x8=)
2026-04-16 04:53:01.482991 | orchestrator |
2026-04-16 04:53:01.482999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-04-16 04:53:01.483005 | orchestrator | Thursday 16 April 2026 04:53:00 +0000 (0:00:00.992) 0:00:10.699 ********
2026-04-16 04:53:01.483022 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQqhNttt91XwGr+7SIrWsLu34l10owJZcnt1A6s8Iv+SvMfuBUo4unserPbq9VRkX+qFSv26S57NX3I9Gb89uBwsCpDPevOtBydIlDP3A6UtxuXUCDL9yAQhEtaSIyf3GQaqv3IsrF4tYmycXLMC7WX19yx0h7rMvtQH9VrmLpEaFHfhJjOXRRs0fvwVbq4OY2vNlikYm1lqeORrFWff2F+g8RdYfvU8+BzULj9Q00PjiX+R2ZFFlxaey/osHl11MX5MM61rt4C5257Adf5nty07lZ58ysRNfmr0g45/46NtfgUNj7JpTTHnb1cTnCoThgXMxziAskquw/0mHxQf+rfvxUj+6K1kzZBFNeVIMj87Lm5qsvIk20WnzHxBj5dQ2MV3Spiyr7MTdZBgHcQwoE9LJ4xZfhQkd8p0nlQhsWoynccqXQ+CNQ0Bas/bi4Cm2YbVGXhXiRQmwoeP/R396Ti9zxY5LyImDtdaaeHHHWdlr6aDMFSFxaDKy+9oMihh8=)
2026-04-16 04:53:11.551301 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5UVxFwqOuHGjNYvc8uTseBVcdUW4AZvkA9/87cSAiVjihfuxEO75P9jtYp0Lw2qy1VzdJPv/pQO9jWJLuAAAE=)
2026-04-16 04:53:11.551409 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG/kmMaVBqq5LWA8o9YHmQCS9A37K6sSCY2yWz0Z3SnL) 2026-04-16 04:53:11.551425 | orchestrator | 2026-04-16 04:53:11.551437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:11.551449 | orchestrator | Thursday 16 April 2026 04:53:01 +0000 (0:00:00.995) 0:00:11.695 ******** 2026-04-16 04:53:11.551459 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAeaw0mSTe0GmGHR22KgBN0vSoIC+4ziHBs5oN0fpppC) 2026-04-16 04:53:11.551471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp1Nac0inpDtaqSjtXTDAOGiGgkM66+ULKj3kOgS7Cn5xUeIt95sIxUsUZuVU3b2tteQp1WMlgVazEWnRmyaqeNY+XcFJekCZ4Zb/2sFr7uJT9mL89ipb63ZVBl+uiJRu91u+bgP+Gk9gCdDRC2KBpjUqkFr4WAK6YuqB251F1KfWVSRWsb34AzZOnMCmKZMSLXWuo5ZZlR8fO75qSVdkggYkudtc/oePSzsK+gA+9XLKCOCdyA1GKMEW9RLQyolaJBvkHvW+EmKPePxlKfAg4Bq8ltHL0+JO6jXZWQzAmoT/6X0sseMf05wZlcl/Fmx8yDhKpbQaIly+qhZBFsOslbczHeMMY9HDCqeefiClLI0Z+74RXKakzQL+OCz4vHs5v+bzR/eX9I+BsCwIRX945juwzdqKtvIyiXyppHY92xAqmRdlNo3MKDqr0zMLV5Q4P9e9quW3qHfYO/RKg3evpEUUsFFWiLwYY2VcWTTsaSUVC1pUNa4+3lmhxgtxOU0s=) 2026-04-16 04:53:11.551507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKgYcKFblDl4NH6eFAi3AWeYUiUpdHuOkjuzS5D7E32d38M68SQM3Wvj4RN6muauaaCOEr9azoCLKFSZY9tJOHI=) 2026-04-16 04:53:11.551518 | orchestrator | 2026-04-16 04:53:11.551527 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-04-16 04:53:11.551538 | orchestrator | Thursday 16 April 2026 04:53:02 +0000 (0:00:00.990) 0:00:12.685 ******** 2026-04-16 04:53:11.551549 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-04-16 04:53:11.551559 | orchestrator | ok: [testbed-manager] => 
(item=testbed-node-3) 2026-04-16 04:53:11.551568 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-04-16 04:53:11.551578 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-04-16 04:53:11.551587 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-04-16 04:53:11.551597 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-04-16 04:53:11.551606 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-04-16 04:53:11.551616 | orchestrator | 2026-04-16 04:53:11.551626 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-04-16 04:53:11.551637 | orchestrator | Thursday 16 April 2026 04:53:07 +0000 (0:00:04.981) 0:00:17.667 ******** 2026-04-16 04:53:11.551647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-04-16 04:53:11.551659 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-04-16 04:53:11.551669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-04-16 04:53:11.551678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-04-16 04:53:11.551688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-04-16 04:53:11.551697 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-04-16 04:53:11.551707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-04-16 04:53:11.551716 | orchestrator | 2026-04-16 04:53:11.551727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:11.551738 | orchestrator | Thursday 16 April 2026 04:53:07 +0000 (0:00:00.162) 0:00:17.830 ******** 2026-04-16 04:53:11.551749 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMdl/S2FlHl7WJwwYU89pixIOT6+uVMhj67ZspRHfXZt) 2026-04-16 04:53:11.551806 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQo7OmkhOSl+047BBCX2DwxlE8M1iHYrJU+ejViRv5Mftz+xRHiKt9pMjzZYI0dOc4CMWSwZWWXqxrQBaiulaN9YVM51mt27rCKekKJf3vnTH+Qadz/I+lznSJunHnw0GH85DlLqXJL7xmeDFbfMHTZZxVNXbYsQwKAUZURCFNcT3zy6tN+EojosA0AECJObyynwdZi/BuPaegWE70Kr3cAVtS0ReLTeKZA1lJ/KU1DT0q0rnQ/1LhXSiXnd9YHjIH/49QANvmgTT9ko/BDMRv9a8X0XIiVylou14dMi6uUWZvOS6il40OU9bcovUmB+v37U8Jf4W2ignEn9nHFbiyhoCzeNdQPH60GbsHIf+BJqV0v378ie6vps99YPPjb4iu8kVzUTiRm+t23pFskw1fCosNk0cdYf9yKT5XBMMNFxArwg7FKBfaHq4fUxrIFs1qHLCgChx0Ugful4n7IDYpjGhbvVMnLFv+ilueUS4rsDSh2V/m2f4V3JdfeiiTxlU=) 2026-04-16 04:53:11.551823 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGFtoq7CbUvnjtZY2UaH4OYTSTbZJ0wmGII+KWKK2J7FuVTXDowGr1fvrzaQCZoXXyA9H8LaubNPbO1f5cQsTYA=) 2026-04-16 04:53:11.551843 | orchestrator | 2026-04-16 04:53:11.551856 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:11.551868 | orchestrator | Thursday 16 April 2026 
04:53:08 +0000 (0:00:00.990) 0:00:18.820 ******** 2026-04-16 04:53:11.551881 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9QarJN350Z9zZvCV4x/hGJb4+8OVYcZat8QXU1bAgopz5olRDoh2sBFS5uBeEV2SmB8If/UCVNXYYUuSmq8Q+LvjJRHevld5TDeUC0XA92W4T1WQ8b5D0VafBBAGiq33TCFDuLngDvlfR0PMJADFWq+6NJTW5YpafQPfEw3swU1skfGktfOHZwwxe/5XjK6z188DEfCKLZ9xFLOHk39XqbmZKJGVjFJ4mbTYNYZ0/GE65Bz0AdYCepOsGRWd0oFQM6NPSUMPYPK2E+ZiBjddwFHkJo/e3C4AZ2jpUjJNnQJ8GbSy7luZ+TQ/HMeWPlBkJ/D3SiDMikDrBJj5SBEnSOuzUK5LuHmPuwFKRJWAFXDqEne0caZGztDDECiN8AsTTX0q7X88f5zl5qe7FRKY2UqRvlxtMLtWvdc3qjmgqU3LX8xZ9xayNVDky/mv9DkXd7XNA/pN1gwOakgut7nofBx3VTE+H2ote2iqGbvq3t0l85l/TjnAFQZMs4eI618U=) 2026-04-16 04:53:11.551894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA47+2W2QVwjUewU6YiIgVwxXXSgHw/iGJVL/jECa8dPr6lTI4rCrgzO8NAHkU0jF9M48MN++OhfkerQBQBk7O0=) 2026-04-16 04:53:11.551906 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBCOPgVYvf680xPjfDWeho1Qt8+ANHx0natMCmtoC9Bg) 2026-04-16 04:53:11.551918 | orchestrator | 2026-04-16 04:53:11.551931 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:11.551943 | orchestrator | Thursday 16 April 2026 04:53:09 +0000 (0:00:00.980) 0:00:19.800 ******** 2026-04-16 04:53:11.551955 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIME3CBDAlwGcQcrdcw65KaI1PoqvCNGLYyIXscl98utS) 2026-04-16 04:53:11.551967 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxh15X8xNl1tB0Ym6pXAIFlY0I6iZM0HkBd577QUVsFZtryNBwZY73GU9pVi2htd8U2olZj8Ng/k6uj+bz23Cb0gBmuliilV3yho6DddT8BV9W3ps+Wa/12irTZ/wL2OGzKelZ4DP+Oh+v5CiZM2rrhTOiQN5dxpd13r5z3oe8UZJwCVyouUlL9Ir5ctNxK+nwlkdfoPP0f87PPrG5l5a613G4OBBsoG3KXWf1OYiQD21Rlo9lxDcy5jtBJpIkkbmC7jtzvD7r2sdw9N7vMC3wE3tumhtxEhgb1P/jzQRWCS06XTXqHDKEeMa1hF7bgfkMNVpzNUvPHrEi1NmFcCqS7VCx9K5AtwMzc5G7FhnMk2vE7Ufmjca/pSnAdP6hZSwUYncNSb56xsE6S0mLAhw7qLKRA1Hh89O4V59fMwDJNC4e0/w8tWQyr8s9vIuev+9yzvWK8zFKOUZR8TZ3l+48gxTy+y+B+5S+xrPWylaKVaPZfSLbIAxb18iy27ZsXnk=) 2026-04-16 04:53:11.551979 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMVjiKbdcWjEJI2qTwPRwaDXYqwN6MV8FmWLvKcxZYyAjwjIj8BxdTTxrOQxkyxFh8UUsQkTYkCG0G3o5jRgqBE=) 2026-04-16 04:53:11.551991 | orchestrator | 2026-04-16 04:53:11.552004 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:11.552016 | orchestrator | Thursday 16 April 2026 04:53:10 +0000 (0:00:00.988) 0:00:20.789 ******** 2026-04-16 04:53:11.552028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjjAkwEzOcJUuCBfYqtxRMjAU7x301nLsnpqUtIJmMeRdn+vRzoGMiLT3+bpi0j22772tAXUpXe7nudaXLNRC1igghdROGt4psstFGfCtEb6wDYf25/k5LyOxc3ASnONTkN+MxzUCB6li69BNRDflW7wJwQV1+UEidoRZDQH0aXImxy5L25NY7ZM7p7PSnmnMaUWca6U7GEQRdbvnaJoxmR1WsGyZEDLhlC1Uw2AhRTHNnakcHRyJDxRRv2Mwz8JNkcM+bNxMmCmKfu8lS9pnBAEuzHUhXOo6uqaN/Tn8Y0eNifVutgcunZ8LUg8GnlqFhgKSBvX4HpFvS74/r3a8fFpfnU8F2FkkMkh+G23QX9H0Y7w3nq5T+LWxDkgTxQ6nFO9WBFQ6Cz7ZOlPxGZ5lrvs73VfR6dbkqVr2S4GPSt+TSi0Geg+4DBxhQDuSWK8JwGOYR2YEHMgBoEoSUSB5/p79j/8kzxE1lm7Gui5ddlJjNF25lVusDsCpkKevpb4U=) 2026-04-16 04:53:11.552040 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNAgsot35QKs3LFb4J5jNApBgnwDWABx9YbigsnP7whBVHm3yyAJ8rw2koa3sJXxUnfKHUdaO8pgxpNHYJKhPWM=) 
2026-04-16 04:53:11.552063 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAiWFluA6ZLf/0/6eEoPyUKNhEtubkMgPppsqgZFfcA1) 2026-04-16 04:53:15.676413 | orchestrator | 2026-04-16 04:53:15.676511 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:15.676525 | orchestrator | Thursday 16 April 2026 04:53:11 +0000 (0:00:00.980) 0:00:21.769 ******** 2026-04-16 04:53:15.676535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCLtQCqoRuXPQ9tlJ5BflZW4wZ9P45b+TXlM1ciKvJk2Tyi+JmLjIOT5z8BM1WhBbKQyOq6fmVh6+rQT3C94yiVxeDDggXJI724sXvFW5uQSyUWbot5EVpBvBHTuOyvb+9fLOPCtMBEGfjqxmjJ1moRCpx2EKTZ9ttvtJ1fnrYlIGDkKJiVRu7NMnV2GwiJVOjvdGdMz9uZuMpYWSGvlqU3kOStPc98ptGEnA4mpxUNOSMHEZZunJR6WZIhUz80b+lbbzDZYhp37fQeTIOREq8TBcYhRZhtqligh6KihzEH1N+2zH5nTGSi2T2M7YLICDM+6LtIU/C4cCa0lXEHTd0A2jF18yi5r9tFKSS02tBr+a7j5weFKsiENDaSh1ocx4ZN/FJL35s6SINJYvmE/qqg9Pw1fo+43ETgXYi0vowBe/5EqHSYHFv2Nio9fT/7blBQL9DPoOAP2Qgtk3nLTo3bpTnsVTVY0HzHnnS6YBenGMtVnKF4eQW1MoAhcw30BDM=) 2026-04-16 04:53:15.676547 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDXx0EX6f1RVdyqfnvjoh4PPQz/t1QxN8WNxfm2QtOI+nlp1HkGMxOhkuVjqLzSja39+1bgUXC0CseA6Dgp98x8=) 2026-04-16 04:53:15.676556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMOmwukGCOhEmz5tJ4daTLw1DweUQ7ZCnH2foEYUwAIh) 2026-04-16 04:53:15.676564 | orchestrator | 2026-04-16 04:53:15.676572 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:15.676579 | orchestrator | Thursday 16 April 2026 04:53:12 +0000 (0:00:01.013) 0:00:22.783 ******** 2026-04-16 04:53:15.676587 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDQqhNttt91XwGr+7SIrWsLu34l10owJZcnt1A6s8Iv+SvMfuBUo4unserPbq9VRkX+qFSv26S57NX3I9Gb89uBwsCpDPevOtBydIlDP3A6UtxuXUCDL9yAQhEtaSIyf3GQaqv3IsrF4tYmycXLMC7WX19yx0h7rMvtQH9VrmLpEaFHfhJjOXRRs0fvwVbq4OY2vNlikYm1lqeORrFWff2F+g8RdYfvU8+BzULj9Q00PjiX+R2ZFFlxaey/osHl11MX5MM61rt4C5257Adf5nty07lZ58ysRNfmr0g45/46NtfgUNj7JpTTHnb1cTnCoThgXMxziAskquw/0mHxQf+rfvxUj+6K1kzZBFNeVIMj87Lm5qsvIk20WnzHxBj5dQ2MV3Spiyr7MTdZBgHcQwoE9LJ4xZfhQkd8p0nlQhsWoynccqXQ+CNQ0Bas/bi4Cm2YbVGXhXiRQmwoeP/R396Ti9zxY5LyImDtdaaeHHHWdlr6aDMFSFxaDKy+9oMihh8=) 2026-04-16 04:53:15.676595 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH5UVxFwqOuHGjNYvc8uTseBVcdUW4AZvkA9/87cSAiVjihfuxEO75P9jtYp0Lw2qy1VzdJPv/pQO9jWJLuAAAE=) 2026-04-16 04:53:15.676603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG/kmMaVBqq5LWA8o9YHmQCS9A37K6sSCY2yWz0Z3SnL) 2026-04-16 04:53:15.676610 | orchestrator | 2026-04-16 04:53:15.676618 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-04-16 04:53:15.676626 | orchestrator | Thursday 16 April 2026 04:53:13 +0000 (0:00:00.982) 0:00:23.766 ******** 2026-04-16 04:53:15.676651 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCp1Nac0inpDtaqSjtXTDAOGiGgkM66+ULKj3kOgS7Cn5xUeIt95sIxUsUZuVU3b2tteQp1WMlgVazEWnRmyaqeNY+XcFJekCZ4Zb/2sFr7uJT9mL89ipb63ZVBl+uiJRu91u+bgP+Gk9gCdDRC2KBpjUqkFr4WAK6YuqB251F1KfWVSRWsb34AzZOnMCmKZMSLXWuo5ZZlR8fO75qSVdkggYkudtc/oePSzsK+gA+9XLKCOCdyA1GKMEW9RLQyolaJBvkHvW+EmKPePxlKfAg4Bq8ltHL0+JO6jXZWQzAmoT/6X0sseMf05wZlcl/Fmx8yDhKpbQaIly+qhZBFsOslbczHeMMY9HDCqeefiClLI0Z+74RXKakzQL+OCz4vHs5v+bzR/eX9I+BsCwIRX945juwzdqKtvIyiXyppHY92xAqmRdlNo3MKDqr0zMLV5Q4P9e9quW3qHfYO/RKg3evpEUUsFFWiLwYY2VcWTTsaSUVC1pUNa4+3lmhxgtxOU0s=) 2026-04-16 04:53:15.676661 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKgYcKFblDl4NH6eFAi3AWeYUiUpdHuOkjuzS5D7E32d38M68SQM3Wvj4RN6muauaaCOEr9azoCLKFSZY9tJOHI=) 2026-04-16 04:53:15.676668 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAeaw0mSTe0GmGHR22KgBN0vSoIC+4ziHBs5oN0fpppC) 2026-04-16 04:53:15.676675 | orchestrator | 2026-04-16 04:53:15.676682 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-04-16 04:53:15.676711 | orchestrator | Thursday 16 April 2026 04:53:14 +0000 (0:00:01.017) 0:00:24.783 ******** 2026-04-16 04:53:15.676720 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-04-16 04:53:15.676728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-04-16 04:53:15.676735 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-04-16 04:53:15.676741 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-04-16 04:53:15.676747 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-04-16 04:53:15.676754 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-04-16 04:53:15.676761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-04-16 04:53:15.676767 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:53:15.676774 | orchestrator | 2026-04-16 04:53:15.676797 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-04-16 04:53:15.676805 | orchestrator | Thursday 16 April 2026 04:53:14 +0000 (0:00:00.161) 0:00:24.945 ******** 2026-04-16 04:53:15.676812 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:53:15.676818 | orchestrator | 2026-04-16 04:53:15.676825 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-04-16 04:53:15.676834 | orchestrator | Thursday 16 April 2026 
04:53:14 +0000 (0:00:00.055) 0:00:25.000 ******** 2026-04-16 04:53:15.676841 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:53:15.676848 | orchestrator | 2026-04-16 04:53:15.676854 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-04-16 04:53:15.676861 | orchestrator | Thursday 16 April 2026 04:53:14 +0000 (0:00:00.052) 0:00:25.053 ******** 2026-04-16 04:53:15.676867 | orchestrator | changed: [testbed-manager] 2026-04-16 04:53:15.676874 | orchestrator | 2026-04-16 04:53:15.676880 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 04:53:15.676887 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 04:53:15.676896 | orchestrator | 2026-04-16 04:53:15.676902 | orchestrator | 2026-04-16 04:53:15.676909 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 04:53:15.676916 | orchestrator | Thursday 16 April 2026 04:53:15 +0000 (0:00:00.671) 0:00:25.725 ******** 2026-04-16 04:53:15.676922 | orchestrator | =============================================================================== 2026-04-16 04:53:15.676929 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.59s 2026-04-16 04:53:15.676935 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.98s 2026-04-16 04:53:15.676943 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-16 04:53:15.676949 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-04-16 04:53:15.676956 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-04-16 04:53:15.676963 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-16 
04:53:15.676970 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-04-16 04:53:15.676977 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-16 04:53:15.676984 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-16 04:53:15.676991 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-16 04:53:15.676999 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-04-16 04:53:15.677005 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-16 04:53:15.677013 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-16 04:53:15.677020 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2026-04-16 04:53:15.677033 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2026-04-16 04:53:15.677040 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.91s 2026-04-16 04:53:15.677048 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.67s 2026-04-16 04:53:15.677055 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-04-16 04:53:15.677063 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-04-16 04:53:15.677070 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s 2026-04-16 04:53:15.937744 | orchestrator | + osism apply squid 2026-04-16 04:53:27.877634 | orchestrator | 2026-04-16 04:53:27 | INFO  | Task 7df96b01-443e-456b-9e4f-d9552614b543 (squid) was prepared for execution. 
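The known_hosts play above scans each testbed host (by inventory name and again by its `ansible_host` IP) and writes rsa, ecdsa, and ed25519 entries before setting file permissions. The following is a hedged sketch of that underlying pattern only — it is NOT the `osism.commons.known_hosts` role implementation, and the target path is a hypothetical example:

```shell
#!/usr/bin/env bash
# Sketch of the scan-and-write pattern visible in the log: for each host
# passed as an argument, collect its host keys with ssh-keyscan and append
# them to a known_hosts file. Assumption: path and timeout are illustrative.
set -euo pipefail

KNOWN_HOSTS="${1:-/tmp/known_hosts.demo}"   # hypothetical target path
shift || true
: > "$KNOWN_HOSTS"

for host in "$@"; do
    # -T 5 is a per-host timeout; the key types match those in the log
    # (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519). Unreachable hosts are
    # skipped rather than failing the whole run.
    ssh-keyscan -T 5 -t rsa,ecdsa,ed25519 "$host" >> "$KNOWN_HOSTS" 2>/dev/null || true
done

# Mirror the role's final "Set file permissions" task and de-duplicate,
# since hosts scanned by both name and IP can yield repeated keys.
chmod 0644 "$KNOWN_HOSTS"
sort -u -o "$KNOWN_HOSTS" "$KNOWN_HOSTS"
echo "wrote $(wc -l < "$KNOWN_HOSTS") entries to $KNOWN_HOSTS"
```

Run with no host arguments the script simply creates an empty, correctly-permissioned file; in the job above the role performs the equivalent work once per host for both hostname and IP entries (hence the paired task timings of roughly one second each).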
2026-04-16 04:53:27.877752 | orchestrator | 2026-04-16 04:53:27 | INFO  | It takes a moment until task 7df96b01-443e-456b-9e4f-d9552614b543 (squid) has been started and output is visible here. 2026-04-16 04:55:26.614356 | orchestrator | 2026-04-16 04:55:26.614460 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-04-16 04:55:26.614508 | orchestrator | 2026-04-16 04:55:26.614518 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-04-16 04:55:26.614525 | orchestrator | Thursday 16 April 2026 04:53:31 +0000 (0:00:00.115) 0:00:00.115 ******** 2026-04-16 04:55:26.614533 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 04:55:26.614541 | orchestrator | 2026-04-16 04:55:26.614547 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-04-16 04:55:26.614555 | orchestrator | Thursday 16 April 2026 04:53:31 +0000 (0:00:00.066) 0:00:00.181 ******** 2026-04-16 04:55:26.614562 | orchestrator | ok: [testbed-manager] 2026-04-16 04:55:26.614570 | orchestrator | 2026-04-16 04:55:26.614576 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-04-16 04:55:26.614583 | orchestrator | Thursday 16 April 2026 04:53:32 +0000 (0:00:01.082) 0:00:01.264 ******** 2026-04-16 04:55:26.614591 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-04-16 04:55:26.614597 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-04-16 04:55:26.614604 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-04-16 04:55:26.614610 | orchestrator | 2026-04-16 04:55:26.614617 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-04-16 04:55:26.614623 | orchestrator | Thursday 
16 April 2026 04:53:33 +0000 (0:00:00.974) 0:00:02.238 ******** 2026-04-16 04:55:26.614630 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-04-16 04:55:26.614637 | orchestrator | 2026-04-16 04:55:26.614644 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-04-16 04:55:26.614649 | orchestrator | Thursday 16 April 2026 04:53:34 +0000 (0:00:00.893) 0:00:03.132 ******** 2026-04-16 04:55:26.614656 | orchestrator | ok: [testbed-manager] 2026-04-16 04:55:26.614661 | orchestrator | 2026-04-16 04:55:26.614668 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-04-16 04:55:26.614674 | orchestrator | Thursday 16 April 2026 04:53:34 +0000 (0:00:00.316) 0:00:03.448 ******** 2026-04-16 04:55:26.614681 | orchestrator | changed: [testbed-manager] 2026-04-16 04:55:26.614688 | orchestrator | 2026-04-16 04:55:26.614694 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-04-16 04:55:26.614701 | orchestrator | Thursday 16 April 2026 04:53:35 +0000 (0:00:00.855) 0:00:04.304 ******** 2026-04-16 04:55:26.614708 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-04-16 04:55:26.614718 | orchestrator | ok: [testbed-manager] 2026-04-16 04:55:26.614724 | orchestrator | 2026-04-16 04:55:26.614730 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-04-16 04:55:26.614760 | orchestrator | Thursday 16 April 2026 04:54:09 +0000 (0:00:34.181) 0:00:38.485 ******** 2026-04-16 04:55:26.614767 | orchestrator | changed: [testbed-manager] 2026-04-16 04:55:26.614773 | orchestrator | 2026-04-16 04:55:26.614779 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-04-16 04:55:26.614786 | orchestrator | Thursday 16 April 2026 04:54:25 +0000 (0:00:15.722) 0:00:54.208 ******** 2026-04-16 04:55:26.614792 | orchestrator | Pausing for 60 seconds 2026-04-16 04:55:26.614798 | orchestrator | changed: [testbed-manager] 2026-04-16 04:55:26.614805 | orchestrator | 2026-04-16 04:55:26.614811 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-04-16 04:55:26.614817 | orchestrator | Thursday 16 April 2026 04:55:25 +0000 (0:01:00.073) 0:01:54.281 ******** 2026-04-16 04:55:26.614824 | orchestrator | ok: [testbed-manager] 2026-04-16 04:55:26.614830 | orchestrator | 2026-04-16 04:55:26.614837 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-04-16 04:55:26.614843 | orchestrator | Thursday 16 April 2026 04:55:25 +0000 (0:00:00.066) 0:01:54.348 ******** 2026-04-16 04:55:26.614849 | orchestrator | changed: [testbed-manager] 2026-04-16 04:55:26.614856 | orchestrator | 2026-04-16 04:55:26.614862 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 04:55:26.614868 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 04:55:26.614875 | orchestrator | 2026-04-16 04:55:26.614882 | orchestrator | 2026-04-16 04:55:26.614888 | orchestrator | 
TASKS RECAP ********************************************************************
2026-04-16 04:55:26.614894 | orchestrator | Thursday 16 April 2026 04:55:26 +0000 (0:00:00.600) 0:01:54.948 ********
2026-04-16 04:55:26.614901 | orchestrator | ===============================================================================
2026-04-16 04:55:26.614923 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2026-04-16 04:55:26.614930 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.18s
2026-04-16 04:55:26.614937 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.72s
2026-04-16 04:55:26.614943 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.08s
2026-04-16 04:55:26.614950 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.97s
2026-04-16 04:55:26.614957 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.89s
2026-04-16 04:55:26.614963 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.86s
2026-04-16 04:55:26.614970 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2026-04-16 04:55:26.614977 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s
2026-04-16 04:55:26.614983 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2026-04-16 04:55:26.614990 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s
2026-04-16 04:55:26.860911 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-04-16 04:55:26.861013 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-16 04:55:26.907173 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-16 04:55:26.907266 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-16 04:55:26.913938 | orchestrator | + set -e
2026-04-16 04:55:26.914076 | orchestrator | + NAMESPACE=kolla/release
2026-04-16 04:55:26.914095 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-16 04:55:26.917752 | orchestrator | ++ semver 9.5.0 9.0.0
2026-04-16 04:55:26.983342 | orchestrator | + [[ 1 -lt 0 ]]
2026-04-16 04:55:26.984172 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-04-16 04:55:38.990953 | orchestrator | 2026-04-16 04:55:38 | INFO  | Task b4c862da-34c4-4b6a-82be-712a36792fce (operator) was prepared for execution.
2026-04-16 04:55:38.991099 | orchestrator | 2026-04-16 04:55:38 | INFO  | It takes a moment until task b4c862da-34c4-4b6a-82be-712a36792fce (operator) has been started and output is visible here.
2026-04-16 04:55:55.104189 | orchestrator |
2026-04-16 04:55:55.104293 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-04-16 04:55:55.104307 | orchestrator |
2026-04-16 04:55:55.104318 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-16 04:55:55.104328 | orchestrator | Thursday 16 April 2026 04:55:42 +0000 (0:00:00.107) 0:00:00.107 ********
2026-04-16 04:55:55.104337 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:55:55.104347 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:55:55.104356 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:55:55.104365 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:55:55.104375 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:55:55.104384 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:55:55.104393 | orchestrator |
2026-04-16 04:55:55.104402 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-04-16 04:55:55.104412 | orchestrator | Thursday 16 April 2026 04:55:46 +0000 (0:00:03.456) 0:00:03.563 ********
2026-04-16 04:55:55.104421 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:55:55.104430 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:55:55.104439 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:55:55.104462 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:55:55.104471 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:55:55.104480 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:55:55.104489 | orchestrator |
2026-04-16 04:55:55.104498 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-04-16 04:55:55.104508 | orchestrator |
2026-04-16 04:55:55.104517 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-04-16 04:55:55.104575 | orchestrator | Thursday 16 April 2026 04:55:47 +0000 (0:00:01.678) 0:00:05.242 ********
2026-04-16 04:55:55.104585 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:55:55.104594 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:55:55.104602 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:55:55.104611 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:55:55.104620 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:55:55.104629 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:55:55.104638 | orchestrator |
2026-04-16 04:55:55.104647 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-04-16 04:55:55.104656 | orchestrator | Thursday 16 April 2026 04:55:47 +0000 (0:00:00.137) 0:00:05.379 ********
2026-04-16 04:55:55.104664 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:55:55.104673 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:55:55.104682 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:55:55.104690 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:55:55.104699 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:55:55.104707 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:55:55.104716 | orchestrator |
2026-04-16 04:55:55.104725 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-04-16 04:55:55.104734 | orchestrator | Thursday 16 April 2026 04:55:48 +0000 (0:00:00.128) 0:00:05.508 ********
2026-04-16 04:55:55.104745 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:55.104757 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:55.104766 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:55.104776 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:55.104786 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:55.104796 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:55.104806 | orchestrator |
2026-04-16 04:55:55.104816 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-04-16 04:55:55.104826 | orchestrator | Thursday 16 April 2026 04:55:48 +0000 (0:00:00.602) 0:00:06.110 ********
2026-04-16 04:55:55.104836 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:55.104846 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:55.104855 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:55.104865 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:55.104875 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:55.104885 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:55.104915 | orchestrator |
2026-04-16 04:55:55.104926 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-04-16 04:55:55.104936 | orchestrator | Thursday 16 April 2026 04:55:49 +0000 (0:00:00.762) 0:00:06.873 ********
2026-04-16 04:55:55.104946 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-04-16 04:55:55.104957 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-04-16 04:55:55.104966 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-04-16 04:55:55.104976 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-04-16 04:55:55.104986 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-04-16 04:55:55.104996 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-04-16 04:55:55.105006 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-04-16 04:55:55.105016 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-04-16 04:55:55.105026 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-04-16 04:55:55.105035 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-04-16 04:55:55.105046 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-04-16 04:55:55.105056 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-04-16 04:55:55.105066 | orchestrator |
2026-04-16 04:55:55.105076 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-04-16 04:55:55.105086 | orchestrator | Thursday 16 April 2026 04:55:50 +0000 (0:00:01.244) 0:00:08.117 ********
2026-04-16 04:55:55.105096 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:55.105105 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:55.105114 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:55.105122 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:55.105131 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:55.105139 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:55.105148 | orchestrator |
2026-04-16 04:55:55.105157 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-04-16 04:55:55.105166 | orchestrator | Thursday 16 April 2026 04:55:51 +0000 (0:00:01.130) 0:00:09.247 ********
2026-04-16 04:55:55.105175 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-04-16 04:55:55.105184 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-04-16 04:55:55.105192 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-04-16 04:55:55.105201 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105225 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105234 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105243 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105251 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105260 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-04-16 04:55:55.105269 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105277 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105286 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105294 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105303 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105311 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-04-16 04:55:55.105320 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105329 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105337 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105346 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105354 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105369 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-04-16 04:55:55.105378 | orchestrator |
2026-04-16 04:55:55.105387 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-04-16 04:55:55.105396 | orchestrator | Thursday 16 April 2026 04:55:53 +0000 (0:00:01.215) 0:00:10.463 ********
2026-04-16 04:55:55.105405 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:55.105414 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:55.105422 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:55.105431 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:55.105440 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:55.105448 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:55.105457 | orchestrator |
2026-04-16 04:55:55.105466 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-04-16 04:55:55.105474 | orchestrator | Thursday 16 April 2026 04:55:53 +0000 (0:00:00.148) 0:00:10.611 ********
2026-04-16 04:55:55.105483 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:55.105492 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:55.105500 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:55.105509 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:55.105517 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:55.105552 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:55.105561 | orchestrator |
2026-04-16 04:55:55.105570 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-04-16 04:55:55.105579 | orchestrator | Thursday 16 April 2026 04:55:53 +0000 (0:00:00.157) 0:00:10.768 ********
2026-04-16 04:55:55.105587 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:55.105596 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:55.105604 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:55.105613 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:55.105621 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:55.105630 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:55.105638 | orchestrator |
2026-04-16 04:55:55.105647 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-04-16 04:55:55.105656 | orchestrator | Thursday 16 April 2026 04:55:53 +0000 (0:00:00.593) 0:00:11.362 ********
2026-04-16 04:55:55.105664 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:55.105673 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:55.105681 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:55.105690 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:55.105706 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:55.105715 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:55.105724 | orchestrator |
2026-04-16 04:55:55.105734 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-04-16 04:55:55.105748 | orchestrator | Thursday 16 April 2026 04:55:54 +0000 (0:00:00.160) 0:00:11.523 ********
2026-04-16 04:55:55.105763 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-16 04:55:55.105777 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-04-16 04:55:55.105792 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:55.105801 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:55.105809 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-16 04:55:55.105818 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-04-16 04:55:55.105827 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-16 04:55:55.105836 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:55.105844 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:55.105853 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:55.105861 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-16 04:55:55.105870 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:55.105879 | orchestrator |
2026-04-16 04:55:55.105887 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-04-16 04:55:55.105896 | orchestrator | Thursday 16 April 2026 04:55:54 +0000 (0:00:00.703) 0:00:12.227 ********
2026-04-16 04:55:55.105912 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:55.105921 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:55.105929 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:55.105938 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:55.105946 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:55.105955 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:55.105963 | orchestrator |
2026-04-16 04:55:55.105972 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-04-16 04:55:55.105981 | orchestrator | Thursday 16 April 2026 04:55:54 +0000 (0:00:00.150) 0:00:12.378 ********
2026-04-16 04:55:55.105989 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:55.105998 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:55.106007 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:55.106060 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:55.106079 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:56.320465 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:56.320618 | orchestrator |
2026-04-16 04:55:56.320636 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-04-16 04:55:56.320649 | orchestrator | Thursday 16 April 2026 04:55:55 +0000 (0:00:00.142) 0:00:12.520 ********
2026-04-16 04:55:56.320661 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:56.320672 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:56.320683 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:56.320694 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:56.320705 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:56.320716 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:56.320727 | orchestrator |
2026-04-16 04:55:56.320738 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-04-16 04:55:56.320749 | orchestrator | Thursday 16 April 2026 04:55:55 +0000 (0:00:00.138) 0:00:12.659 ********
2026-04-16 04:55:56.320760 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:55:56.320771 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:55:56.320801 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:55:56.320813 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:55:56.320823 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:55:56.320834 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:55:56.320845 | orchestrator |
2026-04-16 04:55:56.320856 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-04-16 04:55:56.320867 | orchestrator | Thursday 16 April 2026 04:55:55 +0000 (0:00:00.646) 0:00:13.305 ********
2026-04-16 04:55:56.320878 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:55:56.320889 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:55:56.320900 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:55:56.320912 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:55:56.320922 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:55:56.320933 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:55:56.320944 | orchestrator |
2026-04-16 04:55:56.320956 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 04:55:56.320968 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.320981 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.320992 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.321003 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.321016 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.321050 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 04:55:56.321064 | orchestrator |
2026-04-16 04:55:56.321077 | orchestrator |
2026-04-16 04:55:56.321089 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 04:55:56.321102 | orchestrator | Thursday 16 April 2026 04:55:56 +0000 (0:00:00.213) 0:00:13.518 ********
2026-04-16 04:55:56.321114 | orchestrator | ===============================================================================
2026-04-16 04:55:56.321126 | orchestrator | Gathering Facts --------------------------------------------------------- 3.46s
2026-04-16 04:55:56.321138 | orchestrator | Do not require tty for all users ---------------------------------------- 1.68s
2026-04-16 04:55:56.321150 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.24s
2026-04-16 04:55:56.321163 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s
2026-04-16 04:55:56.321176 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.13s
2026-04-16 04:55:56.321188 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s
2026-04-16 04:55:56.321201 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s
2026-04-16 04:55:56.321213 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2026-04-16 04:55:56.321225 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2026-04-16 04:55:56.321238 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2026-04-16 04:55:56.321250 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-04-16 04:55:56.321262 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-04-16 04:55:56.321275 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-04-16 04:55:56.321287 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2026-04-16 04:55:56.321301 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2026-04-16 04:55:56.321313 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2026-04-16 04:55:56.321325 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2026-04-16 04:55:56.321338 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2026-04-16 04:55:56.321350 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
2026-04-16 04:55:56.579492 | orchestrator | + osism apply --environment custom facts
2026-04-16 04:55:58.407485 | orchestrator | 2026-04-16 04:55:58 | INFO  | Trying to run play facts in environment custom
2026-04-16 04:56:08.517673 | orchestrator | 2026-04-16 04:56:08 | INFO  | Task ce2f52a7-2b91-4820-8aaf-5d523598ac6f (facts) was prepared for execution.
2026-04-16 04:56:08.517793 | orchestrator | 2026-04-16 04:56:08 | INFO  | It takes a moment until task ce2f52a7-2b91-4820-8aaf-5d523598ac6f (facts) has been started and output is visible here.
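The shell trace near the top of this excerpt gates the Kolla image namespace on the manager version: `semver 9.5.0 10.0.0-0` returns -1 (version below 10), so `set-kolla-namespace.sh kolla/release` rewrites `docker_namespace` in `kolla.yml` with sed. A minimal self-contained sketch of that switch; the `semver` helper is stood in for by `sort -V` (dropping the `-0` prerelease tag), and a local file replaces the real `/opt/configuration/inventory/group_vars/all/kolla.yml` path, so both are assumptions:

```shell
#!/usr/bin/env bash
# Sketch of the version-gated namespace switch seen in the trace above.
set -e

VERSION="9.5.0"                # manager version from the log
NAMESPACE="kolla/release"
KOLLA_YML="kolla-demo.yml"     # stand-in for the real kolla.yml path

printf 'docker_namespace: osism\n' > "$KOLLA_YML"

# "VERSION < 10.0.0" via sort -V, standing in for the semver helper.
lowest=$(printf '%s\n%s\n' "$VERSION" "10.0.0" | sort -V | head -n1)
if [ "$VERSION" != "latest" ] && [ "$lowest" = "$VERSION" ] && [ "$VERSION" != "10.0.0" ]; then
    # Same in-place rewrite as the traced sed call.
    sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$KOLLA_YML"
fi

cat "$KOLLA_YML"    # -> docker_namespace: kolla/release
```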
2026-04-16 04:56:52.807263 | orchestrator |
2026-04-16 04:56:52.807385 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-04-16 04:56:52.807403 | orchestrator |
2026-04-16 04:56:52.807415 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-16 04:56:52.807428 | orchestrator | Thursday 16 April 2026 04:56:12 +0000 (0:00:00.079) 0:00:00.079 ********
2026-04-16 04:56:52.807439 | orchestrator | ok: [testbed-manager]
2026-04-16 04:56:52.807452 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:56:52.807464 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:56:52.807477 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:56:52.807496 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.807514 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.807563 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.807583 | orchestrator |
2026-04-16 04:56:52.807602 | orchestrator | TASK [Copy fact file] **********************************************************
2026-04-16 04:56:52.807620 | orchestrator | Thursday 16 April 2026 04:56:13 +0000 (0:00:01.398) 0:00:01.478 ********
2026-04-16 04:56:52.807636 | orchestrator | ok: [testbed-manager]
2026-04-16 04:56:52.807686 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:56:52.807704 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.807723 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.807742 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:56:52.807759 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:56:52.807776 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.807793 | orchestrator |
2026-04-16 04:56:52.807811 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-04-16 04:56:52.807831 | orchestrator |
2026-04-16 04:56:52.807851 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-16 04:56:52.807870 | orchestrator | Thursday 16 April 2026 04:56:14 +0000 (0:00:01.155) 0:00:02.633 ********
2026-04-16 04:56:52.807888 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.807901 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.807914 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.807926 | orchestrator |
2026-04-16 04:56:52.807938 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-16 04:56:52.807952 | orchestrator | Thursday 16 April 2026 04:56:15 +0000 (0:00:00.082) 0:00:02.715 ********
2026-04-16 04:56:52.807964 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.807977 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.807989 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.808001 | orchestrator |
2026-04-16 04:56:52.808013 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-16 04:56:52.808026 | orchestrator | Thursday 16 April 2026 04:56:15 +0000 (0:00:00.183) 0:00:02.899 ********
2026-04-16 04:56:52.808038 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.808050 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.808068 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.808086 | orchestrator |
2026-04-16 04:56:52.808106 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-16 04:56:52.808125 | orchestrator | Thursday 16 April 2026 04:56:15 +0000 (0:00:00.202) 0:00:03.101 ********
2026-04-16 04:56:52.808145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 04:56:52.808165 | orchestrator |
2026-04-16 04:56:52.808182 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-16 04:56:52.808198 | orchestrator | Thursday 16 April 2026 04:56:15 +0000 (0:00:00.124) 0:00:03.226 ********
2026-04-16 04:56:52.808216 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.808234 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.808251 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.808267 | orchestrator |
2026-04-16 04:56:52.808284 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-16 04:56:52.808302 | orchestrator | Thursday 16 April 2026 04:56:15 +0000 (0:00:00.421) 0:00:03.648 ********
2026-04-16 04:56:52.808319 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:56:52.808337 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:56:52.808354 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:56:52.808372 | orchestrator |
2026-04-16 04:56:52.808389 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-16 04:56:52.808406 | orchestrator | Thursday 16 April 2026 04:56:16 +0000 (0:00:00.106) 0:00:03.754 ********
2026-04-16 04:56:52.808424 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.808442 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.808459 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.808477 | orchestrator |
2026-04-16 04:56:52.808494 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-16 04:56:52.808531 | orchestrator | Thursday 16 April 2026 04:56:17 +0000 (0:00:01.014) 0:00:04.769 ********
2026-04-16 04:56:52.808549 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.808568 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.808585 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.808602 | orchestrator |
2026-04-16 04:56:52.808620 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-16 04:56:52.808737 | orchestrator | Thursday 16 April 2026 04:56:17 +0000 (0:00:00.439) 0:00:05.209 ********
2026-04-16 04:56:52.808764 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.808782 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.808801 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.808818 | orchestrator |
2026-04-16 04:56:52.808837 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-16 04:56:52.808855 | orchestrator | Thursday 16 April 2026 04:56:18 +0000 (0:00:01.100) 0:00:06.310 ********
2026-04-16 04:56:52.808968 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.808991 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.809083 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.809104 | orchestrator |
2026-04-16 04:56:52.809122 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-04-16 04:56:52.809140 | orchestrator | Thursday 16 April 2026 04:56:35 +0000 (0:00:16.945) 0:00:23.256 ********
2026-04-16 04:56:52.809158 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:56:52.809177 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:56:52.809193 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:56:52.809211 | orchestrator |
2026-04-16 04:56:52.809229 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-04-16 04:56:52.809276 | orchestrator | Thursday 16 April 2026 04:56:35 +0000 (0:00:00.104) 0:00:23.360 ********
2026-04-16 04:56:52.809294 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:56:52.809311 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:56:52.809329 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:56:52.809346 | orchestrator |
2026-04-16 04:56:52.809376 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-04-16 04:56:52.809395 | orchestrator | Thursday 16 April 2026 04:56:43 +0000 (0:00:08.002) 0:00:31.363 ********
2026-04-16 04:56:52.809413 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.809431 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.809447 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.809463 | orchestrator |
2026-04-16 04:56:52.809480 | orchestrator | TASK [Copy fact files] *********************************************************
2026-04-16 04:56:52.809496 | orchestrator | Thursday 16 April 2026 04:56:44 +0000 (0:00:00.474) 0:00:31.838 ********
2026-04-16 04:56:52.809513 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-04-16 04:56:52.809531 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-04-16 04:56:52.809549 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-04-16 04:56:52.809566 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-04-16 04:56:52.809584 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-04-16 04:56:52.809601 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-04-16 04:56:52.809619 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-04-16 04:56:52.809637 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-04-16 04:56:52.809697 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-04-16 04:56:52.809715 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-04-16 04:56:52.809733 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-04-16 04:56:52.809751 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-04-16 04:56:52.809769 | orchestrator |
2026-04-16 04:56:52.809786 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-04-16 04:56:52.809824 | orchestrator | Thursday 16 April 2026 04:56:47 +0000 (0:00:03.525) 0:00:35.363 ********
2026-04-16 04:56:52.809843 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.809862 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.809881 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.809900 | orchestrator |
2026-04-16 04:56:52.809918 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-16 04:56:52.809935 | orchestrator |
2026-04-16 04:56:52.809953 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 04:56:52.809971 | orchestrator | Thursday 16 April 2026 04:56:49 +0000 (0:00:01.371) 0:00:36.735 ********
2026-04-16 04:56:52.809989 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:56:52.810006 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:56:52.810102 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:56:52.810121 | orchestrator | ok: [testbed-manager]
2026-04-16 04:56:52.810139 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:56:52.810156 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:56:52.810174 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:56:52.810210 | orchestrator |
2026-04-16 04:56:52.810974 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 04:56:52.811000 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 04:56:52.811015 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 04:56:52.811029 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 04:56:52.811042 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 04:56:52.811055 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 04:56:52.811068 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 04:56:52.811081 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 04:56:52.811092 | orchestrator |
2026-04-16 04:56:52.811103 | orchestrator |
2026-04-16 04:56:52.811114 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 04:56:52.811125 | orchestrator | Thursday 16 April 2026 04:56:52 +0000 (0:00:03.730) 0:00:40.465 ********
2026-04-16 04:56:52.811135 | orchestrator | ===============================================================================
2026-04-16 04:56:52.811146 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.95s
2026-04-16 04:56:52.811157 | orchestrator | Install required packages (Debian) -------------------------------------- 8.00s
2026-04-16 04:56:52.811168 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.73s
2026-04-16 04:56:52.811178 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s
2026-04-16 04:56:52.811189 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s
2026-04-16 04:56:52.811200 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.37s
2026-04-16 04:56:52.811231 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2026-04-16 04:56:53.004891 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.10s
2026-04-16 04:56:53.005005 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.01s
2026-04-16 04:56:53.005044 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-04-16 04:56:53.005080 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-04-16 04:56:53.005094 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.42s
2026-04-16 04:56:53.005106 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-04-16 04:56:53.005118 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.18s
2026-04-16 04:56:53.005132 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-04-16 04:56:53.005146 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2026-04-16 04:56:53.005159 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s
2026-04-16 04:56:53.005173 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-04-16 04:56:53.269535 | orchestrator | + osism apply bootstrap
2026-04-16 04:57:05.273427 | orchestrator | 2026-04-16 04:57:05 | INFO  | Task 5ddb75f5-3255-4259-95ab-16d23e0f9440 (bootstrap) was prepared for execution.
2026-04-16 04:57:05.273545 | orchestrator | 2026-04-16 04:57:05 | INFO  | It takes a moment until task 5ddb75f5-3255-4259-95ab-16d23e0f9440 (bootstrap) has been started and output is visible here.
2026-04-16 04:57:19.921605 | orchestrator |
2026-04-16 04:57:19.921772 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-04-16 04:57:19.921793 | orchestrator |
2026-04-16 04:57:19.921806 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-04-16 04:57:19.921818 | orchestrator | Thursday 16 April 2026 04:57:09 +0000 (0:00:00.110) 0:00:00.110 ********
2026-04-16 04:57:19.921829 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:19.921842 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:19.921853 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:19.921864 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:19.921875 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:19.921886 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:19.921897 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:19.921908 | orchestrator |
2026-04-16 04:57:19.921920 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-16 04:57:19.921931 | orchestrator |
2026-04-16 04:57:19.921942 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 04:57:19.921953 | orchestrator | Thursday 16 April 2026 04:57:09 +0000 (0:00:00.185) 0:00:00.296 ********
2026-04-16 04:57:19.921964 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:19.921974 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:19.921985 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:19.921996 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:19.922007 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:19.922081 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:19.922093 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:19.922104 | orchestrator |
2026-04-16 04:57:19.922115 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-04-16 04:57:19.922126 | orchestrator |
2026-04-16 04:57:19.922139 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 04:57:19.922152 | orchestrator | Thursday 16 April 2026 04:57:12 +0000 (0:00:03.699) 0:00:03.995 ********
2026-04-16 04:57:19.922165 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-04-16 04:57:19.922179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-04-16 04:57:19.922191 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-04-16 04:57:19.922203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-04-16 04:57:19.922216 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-04-16 04:57:19.922229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 04:57:19.922241 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-04-16 04:57:19.922254 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-16 04:57:19.922290 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-04-16 04:57:19.922303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-04-16 04:57:19.922316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 04:57:19.922328 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 04:57:19.922340 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-04-16 04:57:19.922353 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-16 04:57:19.922366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-04-16 04:57:19.922379 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 04:57:19.922391 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 04:57:19.922430 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 04:57:19.922444 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-04-16 04:57:19.922456 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-16 04:57:19.922468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-04-16 04:57:19.922481 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 04:57:19.922494 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-16 04:57:19.922505 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:19.922516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 04:57:19.922527 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-16 04:57:19.922537 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 04:57:19.922548 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 04:57:19.922559 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 04:57:19.922570 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 04:57:19.922581 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-04-16 04:57:19.922592 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 04:57:19.922603 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:19.922614 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 04:57:19.922625 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 04:57:19.922636 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 04:57:19.922646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-16 04:57:19.922657 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 04:57:19.922668 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 04:57:19.922712 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 04:57:19.922725 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:19.922736 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:19.922747 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 04:57:19.922758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 04:57:19.922769 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 04:57:19.922779 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 04:57:19.922809 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 04:57:19.922821 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 04:57:19.922832 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 04:57:19.922860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 04:57:19.922871 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:19.922882 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 04:57:19.922893 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:19.922904 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 04:57:19.922923 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 04:57:19.922934 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:19.922944 | orchestrator |
2026-04-16 04:57:19.922955 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-04-16 04:57:19.922966 | orchestrator |
2026-04-16 04:57:19.922977 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-04-16 04:57:19.922988 | orchestrator | Thursday 16 April 2026 04:57:13 +0000 (0:00:00.361) 0:00:04.357 ********
2026-04-16 04:57:19.922998 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:19.923009 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:19.923020 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:19.923030 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:19.923041 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:19.923051 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:19.923062 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:19.923073 | orchestrator |
2026-04-16 04:57:19.923083 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-04-16 04:57:19.923094 | orchestrator | Thursday 16 April 2026 04:57:14 +0000 (0:00:01.155) 0:00:05.512 ********
2026-04-16 04:57:19.923105 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:19.923115 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:19.923126 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:19.923136 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:19.923147 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:19.923157 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:19.923168 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:19.923179 | orchestrator |
2026-04-16 04:57:19.923190 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-04-16 04:57:19.923201 | orchestrator | Thursday 16 April 2026 04:57:15 +0000 (0:00:01.068) 0:00:06.581 ********
2026-04-16 04:57:19.923213 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:19.923226 | orchestrator |
2026-04-16 04:57:19.923237 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-04-16 04:57:19.923247 | orchestrator | Thursday 16 April 2026 04:57:15 +0000 (0:00:00.219) 0:00:06.800 ********
2026-04-16 04:57:19.923258 | orchestrator | changed: [testbed-manager]
2026-04-16 04:57:19.923269 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:19.923279 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:19.923290 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:19.923301 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:19.923311 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:19.923322 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:19.923332 | orchestrator |
2026-04-16 04:57:19.923343 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-04-16 04:57:19.923353 | orchestrator | Thursday 16 April 2026 04:57:17 +0000 (0:00:01.790) 0:00:08.590 ********
2026-04-16 04:57:19.923364 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:19.923376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:19.923388 | orchestrator |
2026-04-16 04:57:19.923399 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-04-16 04:57:19.923409 | orchestrator | Thursday 16 April 2026 04:57:17 +0000 (0:00:00.234) 0:00:08.824 ********
2026-04-16 04:57:19.923420 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:19.923431 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:19.923441 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:19.923452 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:19.923463 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:19.923473 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:19.923490 | orchestrator |
2026-04-16 04:57:19.923506 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-04-16 04:57:19.923517 | orchestrator | Thursday 16 April 2026 04:57:18 +0000 (0:00:00.980) 0:00:09.804 ********
2026-04-16 04:57:19.923528 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:19.923538 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:19.923549 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:19.923560 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:19.923570 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:19.923581 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:19.923591 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:19.923602 | orchestrator |
2026-04-16 04:57:19.923612 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-04-16 04:57:19.923623 | orchestrator | Thursday 16 April 2026 04:57:19 +0000 (0:00:00.565) 0:00:10.370 ********
2026-04-16 04:57:19.923633 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:19.923644 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:19.923655 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:19.923665 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:19.923676 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:19.923744 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:19.923762 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:19.923781 | orchestrator |
2026-04-16 04:57:19.923793 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-04-16 04:57:19.923805 | orchestrator | Thursday 16 April 2026 04:57:19 +0000 (0:00:00.413) 0:00:10.784 ********
2026-04-16 04:57:19.923816 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:19.923826 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:19.923845 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:31.269566 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:31.269679 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:31.269694 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:31.269751 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:31.269763 | orchestrator |
2026-04-16 04:57:31.269776 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-04-16 04:57:31.269789 | orchestrator | Thursday 16 April 2026 04:57:19 +0000 (0:00:00.222) 0:00:11.007 ********
2026-04-16 04:57:31.269801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:31.269831 | orchestrator |
2026-04-16 04:57:31.269842 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-04-16 04:57:31.269854 | orchestrator | Thursday 16 April 2026 04:57:20 +0000 (0:00:00.318) 0:00:11.325 ********
2026-04-16 04:57:31.269865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:31.269877 | orchestrator |
2026-04-16 04:57:31.269888 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-04-16 04:57:31.269899 | orchestrator | Thursday 16 April 2026 04:57:20 +0000 (0:00:00.292) 0:00:11.618 ********
2026-04-16 04:57:31.269910 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.269922 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.269933 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.269943 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.269955 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.269966 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.269977 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.269988 | orchestrator |
2026-04-16 04:57:31.269999 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-04-16 04:57:31.270010 | orchestrator | Thursday 16 April 2026 04:57:21 +0000 (0:00:01.391) 0:00:13.010 ********
2026-04-16 04:57:31.270103 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:31.270118 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:31.270131 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:31.270142 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:31.270154 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:31.270167 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:31.270178 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:31.270190 | orchestrator |
2026-04-16 04:57:31.270203 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-04-16 04:57:31.270216 | orchestrator | Thursday 16 April 2026 04:57:22 +0000 (0:00:00.269) 0:00:13.279 ********
2026-04-16 04:57:31.270228 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.270241 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.270253 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.270265 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.270277 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.270289 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.270301 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.270313 | orchestrator |
2026-04-16 04:57:31.270325 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-04-16 04:57:31.270337 | orchestrator | Thursday 16 April 2026 04:57:22 +0000 (0:00:00.525) 0:00:13.804 ********
2026-04-16 04:57:31.270349 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:31.270361 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:31.270373 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:31.270384 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:31.270396 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:31.270408 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:31.270421 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:31.270433 | orchestrator |
2026-04-16 04:57:31.270444 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-04-16 04:57:31.270456 | orchestrator | Thursday 16 April 2026 04:57:23 +0000 (0:00:00.227) 0:00:14.032 ********
2026-04-16 04:57:31.270467 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.270477 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:31.270488 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:31.270499 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:31.270509 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:31.270528 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:31.270539 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:31.270550 | orchestrator |
2026-04-16 04:57:31.270561 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-04-16 04:57:31.270572 | orchestrator | Thursday 16 April 2026 04:57:23 +0000 (0:00:00.511) 0:00:14.544 ********
2026-04-16 04:57:31.270582 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.270593 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:31.270604 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:31.270614 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:31.270625 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:31.270635 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:31.270646 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:31.270656 | orchestrator |
2026-04-16 04:57:31.270667 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-04-16 04:57:31.270678 | orchestrator | Thursday 16 April 2026 04:57:24 +0000 (0:00:01.083) 0:00:15.628 ********
2026-04-16 04:57:31.270688 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.270730 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.270742 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.270753 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.270764 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.270774 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.270785 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.270795 | orchestrator |
2026-04-16 04:57:31.270814 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-04-16 04:57:31.270825 | orchestrator | Thursday 16 April 2026 04:57:25 +0000 (0:00:01.029) 0:00:16.657 ********
2026-04-16 04:57:31.270856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:31.270869 | orchestrator |
2026-04-16 04:57:31.270880 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-04-16 04:57:31.270890 | orchestrator | Thursday 16 April 2026 04:57:25 +0000 (0:00:00.265) 0:00:16.923 ********
2026-04-16 04:57:31.270901 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:31.270912 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:31.270923 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:31.270934 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:57:31.270944 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:57:31.270955 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:31.270966 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:57:31.270976 | orchestrator |
2026-04-16 04:57:31.270987 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-04-16 04:57:31.270998 | orchestrator | Thursday 16 April 2026 04:57:27 +0000 (0:00:01.244) 0:00:18.167 ********
2026-04-16 04:57:31.271009 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271020 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271030 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271041 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271052 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.271062 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.271073 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.271084 | orchestrator |
2026-04-16 04:57:31.271095 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-04-16 04:57:31.271105 | orchestrator | Thursday 16 April 2026 04:57:27 +0000 (0:00:00.195) 0:00:18.363 ********
2026-04-16 04:57:31.271116 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271127 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271138 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271148 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271159 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.271170 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.271180 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.271191 | orchestrator |
2026-04-16 04:57:31.271201 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-04-16 04:57:31.271212 | orchestrator | Thursday 16 April 2026 04:57:27 +0000 (0:00:00.197) 0:00:18.560 ********
2026-04-16 04:57:31.271223 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271234 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271244 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271255 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271265 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.271276 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.271286 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.271297 | orchestrator |
2026-04-16 04:57:31.271308 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-04-16 04:57:31.271319 | orchestrator | Thursday 16 April 2026 04:57:27 +0000 (0:00:00.191) 0:00:18.752 ********
2026-04-16 04:57:31.271330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:57:31.271343 | orchestrator |
2026-04-16 04:57:31.271354 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-04-16 04:57:31.271364 | orchestrator | Thursday 16 April 2026 04:57:27 +0000 (0:00:00.258) 0:00:19.010 ********
2026-04-16 04:57:31.271375 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271386 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271403 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271414 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271425 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.271436 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.271446 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.271457 | orchestrator |
2026-04-16 04:57:31.271468 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-04-16 04:57:31.271479 | orchestrator | Thursday 16 April 2026 04:57:28 +0000 (0:00:00.511) 0:00:19.521 ********
2026-04-16 04:57:31.271490 | orchestrator | skipping: [testbed-manager]
2026-04-16 04:57:31.271500 | orchestrator | skipping: [testbed-node-3]
2026-04-16 04:57:31.271511 | orchestrator | skipping: [testbed-node-4]
2026-04-16 04:57:31.271522 | orchestrator | skipping: [testbed-node-5]
2026-04-16 04:57:31.271532 | orchestrator | skipping: [testbed-node-0]
2026-04-16 04:57:31.271543 | orchestrator | skipping: [testbed-node-1]
2026-04-16 04:57:31.271554 | orchestrator | skipping: [testbed-node-2]
2026-04-16 04:57:31.271564 | orchestrator |
2026-04-16 04:57:31.271575 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-04-16 04:57:31.271587 | orchestrator | Thursday 16 April 2026 04:57:28 +0000 (0:00:00.222) 0:00:19.744 ********
2026-04-16 04:57:31.271597 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271608 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271619 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271630 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:57:31.271640 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271651 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:57:31.271662 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:57:31.271673 | orchestrator |
2026-04-16 04:57:31.271684 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-04-16 04:57:31.271695 | orchestrator | Thursday 16 April 2026 04:57:29 +0000 (0:00:00.981) 0:00:20.725 ********
2026-04-16 04:57:31.271723 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271734 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271745 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271755 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:57:31.271776 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271787 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:57:31.271798 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:57:31.271808 | orchestrator |
2026-04-16 04:57:31.271819 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-04-16 04:57:31.271831 | orchestrator | Thursday 16 April 2026 04:57:30 +0000 (0:00:00.544) 0:00:21.270 ********
2026-04-16 04:57:31.271842 | orchestrator | ok: [testbed-manager]
2026-04-16 04:57:31.271853 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:57:31.271863 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:57:31.271874 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:57:31.271892 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:58:10.193029 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:58:10.193164 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:58:10.193182 | orchestrator |
2026-04-16 04:58:10.193195 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-04-16 04:58:10.193208 | orchestrator | Thursday 16 April 2026 04:57:31 +0000 (0:00:00.997) 0:00:22.267 ********
2026-04-16 04:58:10.193219 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.193231 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.193242 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.193253 | orchestrator | changed: [testbed-manager]
2026-04-16 04:58:10.193265 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:58:10.193276 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:58:10.193286 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:58:10.193297 | orchestrator |
2026-04-16 04:58:10.193308 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-04-16 04:58:10.193319 | orchestrator | Thursday 16 April 2026 04:57:48 +0000 (0:00:17.162) 0:00:39.430 ********
2026-04-16 04:58:10.193357 | orchestrator | ok: [testbed-manager]
2026-04-16 04:58:10.193369 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.193380 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.193390 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.193401 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:58:10.193411 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:58:10.193422 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:58:10.193433 | orchestrator |
2026-04-16 04:58:10.193443 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-04-16 04:58:10.193454 | orchestrator | Thursday 16 April 2026 04:57:48 +0000 (0:00:00.216) 0:00:39.647 ********
2026-04-16 04:58:10.193465 | orchestrator | ok: [testbed-manager]
2026-04-16 04:58:10.193476 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.193487 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.193497 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.193508 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:58:10.193518 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:58:10.193529 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:58:10.193541 | orchestrator |
2026-04-16 04:58:10.193553 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-04-16 04:58:10.193566 | orchestrator | Thursday 16 April 2026 04:57:48 +0000 (0:00:00.204) 0:00:39.851 ********
2026-04-16 04:58:10.193579 | orchestrator | ok: [testbed-manager]
2026-04-16 04:58:10.193591 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.193603 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.193614 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.193626 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:58:10.193638 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:58:10.193651 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:58:10.193663 | orchestrator |
2026-04-16 04:58:10.193676 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-04-16 04:58:10.193689 | orchestrator | Thursday 16 April 2026 04:57:49 +0000 (0:00:00.195) 0:00:40.046 ********
2026-04-16 04:58:10.193702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:58:10.193717 | orchestrator |
2026-04-16 04:58:10.193730 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-04-16 04:58:10.193742 | orchestrator | Thursday 16 April 2026 04:57:49 +0000 (0:00:00.244) 0:00:40.291 ********
2026-04-16 04:58:10.193786 | orchestrator | ok: [testbed-manager]
2026-04-16 04:58:10.193801 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.193814 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:58:10.193825 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:58:10.193837 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:58:10.193849 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.193862 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.193873 | orchestrator |
2026-04-16 04:58:10.193886 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-04-16 04:58:10.193898 | orchestrator | Thursday 16 April 2026 04:57:50 +0000 (0:00:01.700) 0:00:41.992 ********
2026-04-16 04:58:10.193911 | orchestrator | changed: [testbed-manager]
2026-04-16 04:58:10.193921 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:58:10.193932 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:58:10.193943 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:58:10.193953 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:58:10.193965 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:58:10.193976 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:58:10.193987 | orchestrator |
2026-04-16 04:58:10.193997 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-04-16 04:58:10.194099 | orchestrator | Thursday 16 April 2026 04:57:52 +0000 (0:00:01.070) 0:00:43.062 ********
2026-04-16 04:58:10.194126 | orchestrator | ok: [testbed-manager]
2026-04-16 04:58:10.194145 | orchestrator | ok: [testbed-node-3]
2026-04-16 04:58:10.194164 | orchestrator | ok: [testbed-node-5]
2026-04-16 04:58:10.194188 | orchestrator | ok: [testbed-node-4]
2026-04-16 04:58:10.194199 | orchestrator | ok: [testbed-node-0]
2026-04-16 04:58:10.194210 | orchestrator | ok: [testbed-node-1]
2026-04-16 04:58:10.194220 | orchestrator | ok: [testbed-node-2]
2026-04-16 04:58:10.194231 | orchestrator |
2026-04-16 04:58:10.194242 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-04-16 04:58:10.194252 | orchestrator | Thursday 16 April 2026 04:57:52 +0000 (0:00:00.774) 0:00:43.836 ********
2026-04-16 04:58:10.194264 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 04:58:10.194277 | orchestrator |
2026-04-16 04:58:10.194287 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-04-16 04:58:10.194299 | orchestrator | Thursday 16 April 2026 04:57:53 +0000 (0:00:00.266) 0:00:44.103 ********
2026-04-16 04:58:10.194310 | orchestrator | changed: [testbed-manager]
2026-04-16 04:58:10.194320 | orchestrator | changed: [testbed-node-3]
2026-04-16 04:58:10.194331 | orchestrator | changed: [testbed-node-4]
2026-04-16 04:58:10.194341 | orchestrator | changed: [testbed-node-1]
2026-04-16 04:58:10.194352 | orchestrator | changed: [testbed-node-0]
2026-04-16 04:58:10.194362 | orchestrator | changed: [testbed-node-5]
2026-04-16 04:58:10.194373 | orchestrator | changed: [testbed-node-2]
2026-04-16 04:58:10.194383 | orchestrator |
2026-04-16 04:58:10.194413 | orchestrator | TASK [osism.services.rsyslog :
Include additional log server tasks] ************ 2026-04-16 04:58:10.194424 | orchestrator | Thursday 16 April 2026 04:57:54 +0000 (0:00:00.946) 0:00:45.049 ******** 2026-04-16 04:58:10.194435 | orchestrator | skipping: [testbed-manager] 2026-04-16 04:58:10.194446 | orchestrator | skipping: [testbed-node-3] 2026-04-16 04:58:10.194456 | orchestrator | skipping: [testbed-node-4] 2026-04-16 04:58:10.194467 | orchestrator | skipping: [testbed-node-5] 2026-04-16 04:58:10.194478 | orchestrator | skipping: [testbed-node-0] 2026-04-16 04:58:10.194488 | orchestrator | skipping: [testbed-node-1] 2026-04-16 04:58:10.194499 | orchestrator | skipping: [testbed-node-2] 2026-04-16 04:58:10.194509 | orchestrator | 2026-04-16 04:58:10.194520 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-04-16 04:58:10.194531 | orchestrator | Thursday 16 April 2026 04:57:54 +0000 (0:00:00.232) 0:00:45.282 ******** 2026-04-16 04:58:10.194542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 04:58:10.194553 | orchestrator | 2026-04-16 04:58:10.194564 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-04-16 04:58:10.194574 | orchestrator | Thursday 16 April 2026 04:57:54 +0000 (0:00:00.299) 0:00:45.581 ******** 2026-04-16 04:58:10.194585 | orchestrator | ok: [testbed-manager] 2026-04-16 04:58:10.194595 | orchestrator | ok: [testbed-node-0] 2026-04-16 04:58:10.194606 | orchestrator | ok: [testbed-node-2] 2026-04-16 04:58:10.194616 | orchestrator | ok: [testbed-node-1] 2026-04-16 04:58:10.194627 | orchestrator | ok: [testbed-node-3] 2026-04-16 04:58:10.194638 | orchestrator | ok: [testbed-node-4] 2026-04-16 04:58:10.194648 | orchestrator | ok: [testbed-node-5] 2026-04-16 04:58:10.194659 | 
orchestrator | 2026-04-16 04:58:10.194670 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-04-16 04:58:10.194681 | orchestrator | Thursday 16 April 2026 04:57:56 +0000 (0:00:01.841) 0:00:47.422 ******** 2026-04-16 04:58:10.194691 | orchestrator | changed: [testbed-manager] 2026-04-16 04:58:10.194702 | orchestrator | changed: [testbed-node-3] 2026-04-16 04:58:10.194713 | orchestrator | changed: [testbed-node-0] 2026-04-16 04:58:10.194723 | orchestrator | changed: [testbed-node-4] 2026-04-16 04:58:10.194734 | orchestrator | changed: [testbed-node-1] 2026-04-16 04:58:10.194744 | orchestrator | changed: [testbed-node-5] 2026-04-16 04:58:10.194796 | orchestrator | changed: [testbed-node-2] 2026-04-16 04:58:10.194818 | orchestrator | 2026-04-16 04:58:10.194829 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-04-16 04:58:10.194840 | orchestrator | Thursday 16 April 2026 04:57:57 +0000 (0:00:01.120) 0:00:48.542 ******** 2026-04-16 04:58:10.194851 | orchestrator | changed: [testbed-node-4] 2026-04-16 04:58:10.194862 | orchestrator | changed: [testbed-node-2] 2026-04-16 04:58:10.194873 | orchestrator | changed: [testbed-node-1] 2026-04-16 04:58:10.194884 | orchestrator | changed: [testbed-node-0] 2026-04-16 04:58:10.194895 | orchestrator | changed: [testbed-node-3] 2026-04-16 04:58:10.194905 | orchestrator | changed: [testbed-node-5] 2026-04-16 04:58:10.194916 | orchestrator | changed: [testbed-manager] 2026-04-16 04:58:10.194927 | orchestrator | 2026-04-16 04:58:10.194938 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-04-16 04:58:10.194949 | orchestrator | Thursday 16 April 2026 04:58:07 +0000 (0:00:10.022) 0:00:58.565 ******** 2026-04-16 04:58:10.194959 | orchestrator | ok: [testbed-manager] 2026-04-16 04:58:10.194970 | orchestrator | ok: [testbed-node-2] 2026-04-16 04:58:10.194981 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 04:58:10.194992 | orchestrator | ok: [testbed-node-1] 2026-04-16 04:58:10.195003 | orchestrator | ok: [testbed-node-0] 2026-04-16 04:58:10.195013 | orchestrator | ok: [testbed-node-3] 2026-04-16 04:58:10.195024 | orchestrator | ok: [testbed-node-5] 2026-04-16 04:58:10.195034 | orchestrator | 2026-04-16 04:58:10.195045 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-04-16 04:58:10.195056 | orchestrator | Thursday 16 April 2026 04:58:08 +0000 (0:00:01.085) 0:00:59.651 ******** 2026-04-16 04:58:10.195067 | orchestrator | ok: [testbed-manager] 2026-04-16 04:58:10.195077 | orchestrator | ok: [testbed-node-4] 2026-04-16 04:58:10.195088 | orchestrator | ok: [testbed-node-3] 2026-04-16 04:58:10.195099 | orchestrator | ok: [testbed-node-1] 2026-04-16 04:58:10.195109 | orchestrator | ok: [testbed-node-0] 2026-04-16 04:58:10.195120 | orchestrator | ok: [testbed-node-5] 2026-04-16 04:58:10.195130 | orchestrator | ok: [testbed-node-2] 2026-04-16 04:58:10.195141 | orchestrator | 2026-04-16 04:58:10.195152 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-04-16 04:58:10.195169 | orchestrator | Thursday 16 April 2026 04:58:09 +0000 (0:00:00.858) 0:01:00.510 ******** 2026-04-16 04:58:10.195180 | orchestrator | ok: [testbed-manager] 2026-04-16 04:58:10.195191 | orchestrator | ok: [testbed-node-3] 2026-04-16 04:58:10.195201 | orchestrator | ok: [testbed-node-4] 2026-04-16 04:58:10.195212 | orchestrator | ok: [testbed-node-5] 2026-04-16 04:58:10.195223 | orchestrator | ok: [testbed-node-0] 2026-04-16 04:58:10.195233 | orchestrator | ok: [testbed-node-1] 2026-04-16 04:58:10.195243 | orchestrator | ok: [testbed-node-2] 2026-04-16 04:58:10.195254 | orchestrator | 2026-04-16 04:58:10.195265 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-04-16 04:58:10.195276 | orchestrator | Thursday 
16 April 2026 04:58:09 +0000 (0:00:00.223) 0:01:00.733 ******** 2026-04-16 04:58:10.195286 | orchestrator | ok: [testbed-manager] 2026-04-16 04:58:10.195297 | orchestrator | ok: [testbed-node-3] 2026-04-16 04:58:10.195307 | orchestrator | ok: [testbed-node-4] 2026-04-16 04:58:10.195318 | orchestrator | ok: [testbed-node-5] 2026-04-16 04:58:10.195328 | orchestrator | ok: [testbed-node-0] 2026-04-16 04:58:10.195339 | orchestrator | ok: [testbed-node-1] 2026-04-16 04:58:10.195349 | orchestrator | ok: [testbed-node-2] 2026-04-16 04:58:10.195360 | orchestrator | 2026-04-16 04:58:10.195371 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-04-16 04:58:10.195382 | orchestrator | Thursday 16 April 2026 04:58:09 +0000 (0:00:00.188) 0:01:00.922 ******** 2026-04-16 04:58:10.195393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 04:58:10.195404 | orchestrator | 2026-04-16 04:58:10.195422 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-04-16 05:00:27.774286 | orchestrator | Thursday 16 April 2026 04:58:10 +0000 (0:00:00.269) 0:01:01.191 ******** 2026-04-16 05:00:27.774408 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.774427 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.774441 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.774452 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.774464 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.774475 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.774487 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.774499 | orchestrator | 2026-04-16 05:00:27.774512 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 
2026-04-16 05:00:27.774524 | orchestrator | Thursday 16 April 2026 04:58:11 +0000 (0:00:01.809) 0:01:03.001 ******** 2026-04-16 05:00:27.774536 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:27.774549 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:27.774561 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:27.774572 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:27.774583 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:27.774595 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:27.774607 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:27.774618 | orchestrator | 2026-04-16 05:00:27.774630 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-04-16 05:00:27.774643 | orchestrator | Thursday 16 April 2026 04:58:12 +0000 (0:00:00.585) 0:01:03.587 ******** 2026-04-16 05:00:27.774654 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.774666 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.774678 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.774689 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.774701 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.774712 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.774723 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.774735 | orchestrator | 2026-04-16 05:00:27.774748 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-04-16 05:00:27.774760 | orchestrator | Thursday 16 April 2026 04:58:12 +0000 (0:00:00.209) 0:01:03.796 ******** 2026-04-16 05:00:27.774772 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.774784 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.774796 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.774807 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.774820 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.774834 | 
orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.774847 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.774859 | orchestrator | 2026-04-16 05:00:27.774872 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-04-16 05:00:27.774887 | orchestrator | Thursday 16 April 2026 04:58:13 +0000 (0:00:01.162) 0:01:04.958 ******** 2026-04-16 05:00:27.774900 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:27.774913 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:27.774926 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:27.774962 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:27.774976 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:27.774990 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:27.775003 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:27.775016 | orchestrator | 2026-04-16 05:00:27.775035 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-04-16 05:00:27.775049 | orchestrator | Thursday 16 April 2026 04:58:15 +0000 (0:00:01.905) 0:01:06.864 ******** 2026-04-16 05:00:27.775062 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.775074 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.775087 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.775100 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.775112 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.775126 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.775139 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.775152 | orchestrator | 2026-04-16 05:00:27.775165 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-04-16 05:00:27.775204 | orchestrator | Thursday 16 April 2026 04:58:18 +0000 (0:00:02.607) 0:01:09.471 ******** 2026-04-16 05:00:27.775216 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.775227 
| orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.775239 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.775250 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.775261 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.775272 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.775283 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.775295 | orchestrator | 2026-04-16 05:00:27.775306 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-04-16 05:00:27.775317 | orchestrator | Thursday 16 April 2026 04:58:56 +0000 (0:00:37.748) 0:01:47.220 ******** 2026-04-16 05:00:27.775329 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:27.775340 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:27.775352 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:27.775363 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:27.775374 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:27.775386 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:27.775397 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:27.775408 | orchestrator | 2026-04-16 05:00:27.775420 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-04-16 05:00:27.775431 | orchestrator | Thursday 16 April 2026 05:00:15 +0000 (0:01:19.183) 0:03:06.404 ******** 2026-04-16 05:00:27.775443 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:27.775455 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.775466 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.775478 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.775489 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.775500 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.775512 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.775523 | orchestrator | 2026-04-16 05:00:27.775534 | orchestrator | TASK [osism.commons.packages 
: Remove dependencies that are no longer required] *** 2026-04-16 05:00:27.775546 | orchestrator | Thursday 16 April 2026 05:00:17 +0000 (0:00:01.757) 0:03:08.161 ******** 2026-04-16 05:00:27.775557 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:27.775568 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:27.775579 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:27.775591 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:27.775602 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:27.775613 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:27.775624 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:27.775635 | orchestrator | 2026-04-16 05:00:27.775647 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-04-16 05:00:27.775658 | orchestrator | Thursday 16 April 2026 05:00:26 +0000 (0:00:09.583) 0:03:17.745 ******** 2026-04-16 05:00:27.775705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-04-16 05:00:27.775740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 
'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-04-16 05:00:27.775764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-04-16 05:00:27.775778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-16 05:00:27.775790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-04-16 05:00:27.775801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-04-16 05:00:27.775812 | orchestrator | 2026-04-16 05:00:27.775824 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-04-16 05:00:27.775835 | orchestrator | Thursday 16 April 2026 05:00:27 +0000 (0:00:00.329) 0:03:18.075 ******** 2026-04-16 05:00:27.775846 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-16 05:00:27.775857 | orchestrator | 
skipping: [testbed-manager] 2026-04-16 05:00:27.775869 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-16 05:00:27.775880 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:00:27.775891 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-16 05:00:27.775907 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-04-16 05:00:27.775919 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:27.775930 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:27.775958 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:00:27.775969 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:00:27.775980 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:00:27.775991 | orchestrator | 2026-04-16 05:00:27.776003 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-04-16 05:00:27.776014 | orchestrator | Thursday 16 April 2026 05:00:27 +0000 (0:00:00.617) 0:03:18.693 ******** 2026-04-16 05:00:27.776024 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-16 05:00:27.776037 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-16 05:00:27.776048 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-16 05:00:27.776059 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-16 05:00:27.776070 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-16 05:00:27.776089 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-16 05:00:35.241174 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-16 05:00:35.241259 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-16 05:00:35.241287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-16 05:00:35.241294 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-16 05:00:35.241301 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-16 05:00:35.241307 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-16 05:00:35.241314 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-16 05:00:35.241320 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-16 05:00:35.241326 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-16 05:00:35.241332 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-16 05:00:35.241339 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-16 05:00:35.241345 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-16 05:00:35.241352 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-16 05:00:35.241358 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:35.241366 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-16 05:00:35.241372 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-16 05:00:35.241378 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-16 05:00:35.241384 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-16 05:00:35.241390 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-16 05:00:35.241397 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-16 05:00:35.241403 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-16 05:00:35.241418 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-16 05:00:35.241425 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-04-16 05:00:35.241431 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-16 05:00:35.241437 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-04-16 05:00:35.241443 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-16 05:00:35.241449 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-04-16 05:00:35.241455 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-16 05:00:35.241462 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-04-16 05:00:35.241468 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
05:00:35.241486 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-04-16 05:00:35.241492 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-04-16 05:00:35.241499 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-04-16 05:00:35.241505 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-04-16 05:00:35.241511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-04-16 05:00:35.241522 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:35.241529 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-04-16 05:00:35.241535 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:35.241541 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-16 05:00:35.241547 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-16 05:00:35.241553 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-04-16 05:00:35.241559 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-16 05:00:35.241565 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-16 05:00:35.241583 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-04-16 05:00:35.241590 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-16 05:00:35.241596 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-16 
05:00:35.241602 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-04-16 05:00:35.241608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241614 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241620 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241627 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241633 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-04-16 05:00:35.241645 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-16 05:00:35.241651 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-16 05:00:35.241657 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-04-16 05:00:35.241663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-16 05:00:35.241670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-16 05:00:35.241676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-16 05:00:35.241682 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-16 05:00:35.241688 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-16 05:00:35.241694 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-16 05:00:35.241700 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-16 05:00:35.241706 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-16 05:00:35.241713 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-04-16 05:00:35.241719 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-04-16 05:00:35.241726 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-04-16 05:00:35.241734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-04-16 05:00:35.241745 | orchestrator | 2026-04-16 05:00:35.241753 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-04-16 05:00:35.241761 | orchestrator | Thursday 16 April 2026 05:00:32 +0000 (0:00:04.610) 0:03:23.304 ******** 2026-04-16 05:00:35.241768 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241775 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241782 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241799 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241814 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-04-16 05:00:35.241821 | orchestrator | 
2026-04-16 05:00:35.241828 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-04-16 05:00:35.241835 | orchestrator | Thursday 16 April 2026 05:00:33 +0000 (0:00:01.473) 0:03:24.777 ******** 2026-04-16 05:00:35.241843 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:35.241850 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:35.241857 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:35.241864 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:00:35.241871 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:35.241878 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:00:35.241885 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:35.241893 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:00:35.241900 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:35.241907 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:35.241918 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:48.007213 | orchestrator | 2026-04-16 05:00:48.007347 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] ***************** 2026-04-16 05:00:48.007377 | orchestrator | Thursday 16 April 2026 05:00:35 +0000 (0:00:01.464) 0:03:26.242 ******** 2026-04-16 05:00:48.007397 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:48.007417 | orchestrator | 
skipping: [testbed-manager] 2026-04-16 05:00:48.007437 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:48.007456 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:48.007475 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:00:48.007494 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-04-16 05:00:48.007513 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:48.007531 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:48.007549 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:48.007569 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:48.007589 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-04-16 05:00:48.007607 | orchestrator | 2026-04-16 05:00:48.007628 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-04-16 05:00:48.007667 | orchestrator | Thursday 16 April 2026 05:00:35 +0000 (0:00:00.567) 0:03:26.810 ******** 2026-04-16 05:00:48.007678 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-16 05:00:48.007690 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:48.007701 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-16 05:00:48.007712 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-16 05:00:48.007723 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:00:48.007734 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 05:00:48.007748 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-04-16 05:00:48.007762 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:00:48.007776 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-16 05:00:48.007789 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-16 05:00:48.007802 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-04-16 05:00:48.007815 | orchestrator | 2026-04-16 05:00:48.007828 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-04-16 05:00:48.007842 | orchestrator | Thursday 16 April 2026 05:00:36 +0000 (0:00:00.559) 0:03:27.369 ******** 2026-04-16 05:00:48.007856 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:48.007869 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:00:48.007881 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:48.007893 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:48.007906 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:00:48.007919 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:00:48.007931 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:00:48.007944 | orchestrator | 2026-04-16 05:00:48.007957 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-04-16 05:00:48.007999 | orchestrator | Thursday 16 April 2026 05:00:36 +0000 (0:00:00.255) 0:03:27.624 ******** 2026-04-16 05:00:48.008012 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:48.008026 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:48.008040 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:48.008053 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:48.008065 | 
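The sysctl tasks above loop over `{'name': …, 'value': …}` items per host group. As a rough illustration of what those items amount to, here is a minimal Python sketch that renders such a list into `sysctl.conf`-style text; the item list below is copied from values visible in the log, and the helper name is hypothetical (the role itself applies the values via Ansible, not this code).

```python
# Sketch: render (name, value) items like those in the sysctl tasks above
# into an /etc/sysctl.d drop-in body. Illustrative only; not the
# osism.commons.sysctl role's actual implementation.
sysctl_items = [
    {"name": "net.ipv4.tcp_max_syn_backlog", "value": 8192},
    {"name": "net.core.somaxconn", "value": 4096},
    {"name": "vm.swappiness", "value": 1},
]

def render_sysctl_conf(items):
    """Return sysctl.conf-style text, one 'key = value' line per item."""
    return "".join(f"{i['name']} = {i['value']}\n" for i in items)

print(render_sysctl_conf(sysctl_items))
```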
orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:48.008079 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:48.008092 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:48.008103 | orchestrator | 2026-04-16 05:00:48.008114 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-04-16 05:00:48.008125 | orchestrator | Thursday 16 April 2026 05:00:42 +0000 (0:00:05.794) 0:03:33.418 ******** 2026-04-16 05:00:48.008137 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-04-16 05:00:48.008148 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-04-16 05:00:48.008159 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:48.008170 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-04-16 05:00:48.008181 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:00:48.008192 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-04-16 05:00:48.008202 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:48.008213 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:48.008225 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-04-16 05:00:48.008236 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-04-16 05:00:48.008266 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:00:48.008278 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:00:48.008289 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-04-16 05:00:48.008300 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:00:48.008323 | orchestrator | 2026-04-16 05:00:48.008334 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-04-16 05:00:48.008345 | orchestrator | Thursday 16 April 2026 05:00:42 +0000 (0:00:00.261) 0:03:33.680 ******** 2026-04-16 05:00:48.008356 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-04-16 05:00:48.008367 | orchestrator | ok: [testbed-node-3] => 
(item=cron) 2026-04-16 05:00:48.008378 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-04-16 05:00:48.008410 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-04-16 05:00:48.008422 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-04-16 05:00:48.008433 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-04-16 05:00:48.008444 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-04-16 05:00:48.008455 | orchestrator | 2026-04-16 05:00:48.008466 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-04-16 05:00:48.008477 | orchestrator | Thursday 16 April 2026 05:00:43 +0000 (0:00:01.045) 0:03:34.725 ******** 2026-04-16 05:00:48.008489 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:00:48.008503 | orchestrator | 2026-04-16 05:00:48.008514 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-04-16 05:00:48.008525 | orchestrator | Thursday 16 April 2026 05:00:44 +0000 (0:00:00.383) 0:03:35.109 ******** 2026-04-16 05:00:48.008536 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:48.008547 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:48.008558 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:48.008569 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:48.008580 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:48.008590 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:48.008601 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:48.008612 | orchestrator | 2026-04-16 05:00:48.008623 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-04-16 05:00:48.008634 | orchestrator | Thursday 16 April 2026 05:00:45 +0000 (0:00:01.209) 0:03:36.318 
******** 2026-04-16 05:00:48.008645 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:48.008656 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:48.008667 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:48.008677 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:48.008688 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:48.008699 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:48.008709 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:48.008720 | orchestrator | 2026-04-16 05:00:48.008732 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-04-16 05:00:48.008742 | orchestrator | Thursday 16 April 2026 05:00:45 +0000 (0:00:00.630) 0:03:36.949 ******** 2026-04-16 05:00:48.008753 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:48.008765 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:48.008776 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:48.008786 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:48.008797 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:48.008808 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:48.008819 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:48.008830 | orchestrator | 2026-04-16 05:00:48.008840 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-04-16 05:00:48.008851 | orchestrator | Thursday 16 April 2026 05:00:46 +0000 (0:00:00.606) 0:03:37.555 ******** 2026-04-16 05:00:48.008862 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:48.008874 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:48.008884 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:48.008895 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:48.008906 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:48.008917 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:48.008927 | orchestrator | ok: [testbed-node-2] 2026-04-16 
05:00:48.008938 | orchestrator | 2026-04-16 05:00:48.008949 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-04-16 05:00:48.008995 | orchestrator | Thursday 16 April 2026 05:00:47 +0000 (0:00:00.535) 0:03:38.091 ******** 2026-04-16 05:00:48.009016 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314074.3510587, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:48.009032 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314108.4085839, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:48.009044 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314132.2748346, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:48.009079 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314117.7669656, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652425 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314123.817375, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652533 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314104.9872468, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652549 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1776314115.577972, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652585 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652611 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652623 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652635 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652674 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652687 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 
05:00:52.652699 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 05:00:52.652720 | orchestrator | 2026-04-16 05:00:52.652733 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-04-16 05:00:52.652746 | orchestrator | Thursday 16 April 2026 05:00:47 +0000 (0:00:00.910) 0:03:39.001 ******** 2026-04-16 05:00:52.652757 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:52.652769 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:52.652780 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:52.652790 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:52.652802 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:52.652813 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:52.652823 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:52.652834 | orchestrator | 2026-04-16 05:00:52.652845 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-04-16 05:00:52.652856 | orchestrator | Thursday 16 April 2026 05:00:49 +0000 (0:00:01.052) 0:03:40.053 ******** 2026-04-16 05:00:52.652866 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:52.652877 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:52.652888 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:52.652898 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:52.652909 | orchestrator | changed: [testbed-node-1] 
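The "Remove pam_motd.so rule" task above edits files such as `/etc/pam.d/sshd` and `/etc/pam.d/login` on each host. Conceptually it drops every PAM rule that invokes `pam_motd.so` while leaving other rules intact; a small Python sketch of that filtering follows. The sample file content is hypothetical, and the role itself performs this with an Ansible task rather than this code.

```python
# Sketch of the pam_motd.so removal: keep every line of a pam.d file
# except rules that reference pam_motd.so. Sample content is made up
# for illustration.
sample_pam_sshd = """\
session    optional     pam_motd.so  motd=/run/motd.dynamic
session    optional     pam_motd.so noupdate
session    required     pam_limits.so
"""

def strip_pam_motd(content):
    """Remove every rule referencing pam_motd.so; keep the rest verbatim."""
    return "".join(
        line + "\n"
        for line in content.splitlines()
        if "pam_motd.so" not in line
    )

print(strip_pam_motd(sample_pam_sshd))
```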
2026-04-16 05:00:52.652919 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:52.652930 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:52.652941 | orchestrator | 2026-04-16 05:00:52.652958 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-04-16 05:00:52.653001 | orchestrator | Thursday 16 April 2026 05:00:50 +0000 (0:00:01.096) 0:03:41.150 ******** 2026-04-16 05:00:52.653015 | orchestrator | changed: [testbed-manager] 2026-04-16 05:00:52.653035 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:00:52.653054 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:00:52.653072 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:00:52.653091 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:00:52.653109 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:00:52.653127 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:00:52.653144 | orchestrator | 2026-04-16 05:00:52.653163 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-04-16 05:00:52.653182 | orchestrator | Thursday 16 April 2026 05:00:51 +0000 (0:00:01.034) 0:03:42.185 ******** 2026-04-16 05:00:52.653201 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:00:52.653218 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:00:52.653236 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:00:52.653256 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:00:52.653279 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:00:52.653297 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:00:52.653316 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:00:52.653335 | orchestrator | 2026-04-16 05:00:52.653353 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-04-16 05:00:52.653372 | orchestrator | Thursday 16 April 2026 05:00:51 +0000 (0:00:00.264) 0:03:42.449 ******** 2026-04-16 
05:00:52.653390 | orchestrator | ok: [testbed-manager] 2026-04-16 05:00:52.653409 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:00:52.653426 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:00:52.653445 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:00:52.653462 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:00:52.653480 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:00:52.653497 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:00:52.653515 | orchestrator | 2026-04-16 05:00:52.653534 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-04-16 05:00:52.653553 | orchestrator | Thursday 16 April 2026 05:00:52 +0000 (0:00:00.851) 0:03:43.300 ******** 2026-04-16 05:00:52.653571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:00:52.653623 | orchestrator | 2026-04-16 05:00:52.653642 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-04-16 05:00:52.653678 | orchestrator | Thursday 16 April 2026 05:00:52 +0000 (0:00:00.354) 0:03:43.655 ******** 2026-04-16 05:02:11.909130 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909231 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:02:11.909243 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:02:11.909253 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:02:11.909261 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:02:11.909270 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:02:11.909278 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:02:11.909287 | orchestrator | 2026-04-16 05:02:11.909296 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-04-16 05:02:11.909305 | orchestrator | 
Thursday 16 April 2026 05:01:01 +0000 (0:00:08.515) 0:03:52.170 ******** 2026-04-16 05:02:11.909314 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909322 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:02:11.909330 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:02:11.909338 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909346 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:02:11.909354 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:02:11.909362 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:02:11.909369 | orchestrator | 2026-04-16 05:02:11.909378 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-04-16 05:02:11.909386 | orchestrator | Thursday 16 April 2026 05:01:02 +0000 (0:00:01.194) 0:03:53.364 ******** 2026-04-16 05:02:11.909394 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909402 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:02:11.909410 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:02:11.909418 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:02:11.909426 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:02:11.909433 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909441 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:02:11.909449 | orchestrator | 2026-04-16 05:02:11.909457 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-04-16 05:02:11.909465 | orchestrator | Thursday 16 April 2026 05:01:03 +0000 (0:00:01.024) 0:03:54.389 ******** 2026-04-16 05:02:11.909473 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909481 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:02:11.909489 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:02:11.909497 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:02:11.909506 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:02:11.909514 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909522 | orchestrator | ok: 
[testbed-node-2] 2026-04-16 05:02:11.909530 | orchestrator | 2026-04-16 05:02:11.909538 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-04-16 05:02:11.909547 | orchestrator | Thursday 16 April 2026 05:01:03 +0000 (0:00:00.278) 0:03:54.668 ******** 2026-04-16 05:02:11.909555 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909563 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:02:11.909571 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:02:11.909579 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:02:11.909587 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:02:11.909595 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909603 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:02:11.909611 | orchestrator | 2026-04-16 05:02:11.909619 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-04-16 05:02:11.909627 | orchestrator | Thursday 16 April 2026 05:01:03 +0000 (0:00:00.294) 0:03:54.962 ******** 2026-04-16 05:02:11.909635 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909643 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:02:11.909651 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:02:11.909681 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:02:11.909689 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:02:11.909697 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909705 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:02:11.909713 | orchestrator | 2026-04-16 05:02:11.909721 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-04-16 05:02:11.909729 | orchestrator | Thursday 16 April 2026 05:01:04 +0000 (0:00:00.275) 0:03:55.238 ******** 2026-04-16 05:02:11.909737 | orchestrator | ok: [testbed-manager] 2026-04-16 05:02:11.909745 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:02:11.909753 | orchestrator | ok: 
[testbed-node-4]
2026-04-16 05:02:11.909761 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:11.909769 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:11.909777 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:11.909785 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:11.909793 | orchestrator |
2026-04-16 05:02:11.909801 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-04-16 05:02:11.909809 | orchestrator | Thursday 16 April 2026 05:01:09 +0000 (0:00:05.723) 0:04:00.962 ********
2026-04-16 05:02:11.909830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:02:11.909841 | orchestrator |
2026-04-16 05:02:11.909849 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-04-16 05:02:11.909867 | orchestrator | Thursday 16 April 2026 05:01:10 +0000 (0:00:00.375) 0:04:01.338 ********
2026-04-16 05:02:11.909875 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.909883 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-04-16 05:02:11.909892 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.909900 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-04-16 05:02:11.909908 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:11.909930 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.909939 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:11.909947 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-04-16 05:02:11.909955 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.909963 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:11.909971 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-04-16 05:02:11.909979 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.909987 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-04-16 05:02:11.909996 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:11.910004 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.910012 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-04-16 05:02:11.910089 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:11.910099 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:11.910107 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-04-16 05:02:11.910115 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-04-16 05:02:11.910123 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:11.910131 | orchestrator |
2026-04-16 05:02:11.910139 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-04-16 05:02:11.910147 | orchestrator | Thursday 16 April 2026 05:01:10 +0000 (0:00:00.309) 0:04:01.647 ********
2026-04-16 05:02:11.910156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:02:11.910164 | orchestrator |
2026-04-16 05:02:11.910172 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-04-16 05:02:11.910189 | orchestrator | Thursday 16 April 2026 05:01:11 +0000 (0:00:00.365) 0:04:02.013 ********
2026-04-16 05:02:11.910197 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-04-16 05:02:11.910206 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-04-16 05:02:11.910214 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:11.910222 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-04-16 05:02:11.910230 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:11.910238 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-04-16 05:02:11.910246 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:11.910254 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-04-16 05:02:11.910262 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:11.910270 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-04-16 05:02:11.910278 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:11.910286 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:11.910294 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-04-16 05:02:11.910302 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:11.910310 | orchestrator |
2026-04-16 05:02:11.910319 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-04-16 05:02:11.910327 | orchestrator | Thursday 16 April 2026 05:01:11 +0000 (0:00:00.286) 0:04:02.299 ********
2026-04-16 05:02:11.910335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:02:11.910343 | orchestrator |
2026-04-16 05:02:11.910351 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-04-16 05:02:11.910359 | orchestrator | Thursday 16 April 2026 05:01:11 +0000 (0:00:00.362) 0:04:02.662 ********
2026-04-16 05:02:11.910367 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:11.910375 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:11.910383 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:11.910396 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:11.910404 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:11.910412 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:11.910421 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:11.910429 | orchestrator |
2026-04-16 05:02:11.910437 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-04-16 05:02:11.910445 | orchestrator | Thursday 16 April 2026 05:01:46 +0000 (0:00:35.305) 0:04:37.967 ********
2026-04-16 05:02:11.910453 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:11.910461 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:11.910469 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:11.910477 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:11.910485 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:11.910493 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:11.910501 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:11.910509 | orchestrator |
2026-04-16 05:02:11.910517 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-04-16 05:02:11.910526 | orchestrator | Thursday 16 April 2026 05:01:55 +0000 (0:00:08.598) 0:04:46.566 ********
2026-04-16 05:02:11.910534 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:11.910542 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:11.910550 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:11.910558 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:11.910566 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:11.910574 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:11.910582 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:11.910590 | orchestrator |
2026-04-16 05:02:11.910598 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-04-16 05:02:11.910613 | orchestrator | Thursday 16 April 2026 05:02:03 +0000 (0:00:08.119) 0:04:54.685 ********
2026-04-16 05:02:11.910621 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:11.910629 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:11.910637 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:11.910645 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:11.910653 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:11.910661 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:11.910669 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:11.910677 | orchestrator |
2026-04-16 05:02:11.910685 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-04-16 05:02:11.910693 | orchestrator | Thursday 16 April 2026 05:02:05 +0000 (0:00:01.762) 0:04:56.448 ********
2026-04-16 05:02:11.910702 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:11.910710 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:11.910718 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:11.910726 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:11.910734 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:11.910742 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:11.910749 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:11.910758 | orchestrator |
2026-04-16 05:02:11.910771 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-04-16 05:02:22.362906 | orchestrator | Thursday 16 April 2026 05:02:11 +0000 (0:00:06.453) 0:05:02.901 ********
2026-04-16 05:02:22.362993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:02:22.363004 | orchestrator |
2026-04-16 05:02:22.363012 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-04-16 05:02:22.363019 | orchestrator | Thursday 16 April 2026 05:02:12 +0000 (0:00:00.392) 0:05:03.294 ********
2026-04-16 05:02:22.363025 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:22.363033 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:22.363078 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:22.363085 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:22.363091 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:22.363098 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:22.363104 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:22.363110 | orchestrator |
2026-04-16 05:02:22.363117 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-04-16 05:02:22.363124 | orchestrator | Thursday 16 April 2026 05:02:13 +0000 (0:00:00.723) 0:05:04.017 ********
2026-04-16 05:02:22.363130 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:22.363138 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:22.363144 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:22.363150 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:22.363157 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:22.363163 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:22.363169 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:22.363176 | orchestrator |
2026-04-16 05:02:22.363182 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-04-16 05:02:22.363188 | orchestrator | Thursday 16 April 2026 05:02:14 +0000 (0:00:01.720) 0:05:05.738 ********
2026-04-16 05:02:22.363195 | orchestrator | changed: [testbed-manager]
2026-04-16 05:02:22.363201 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:02:22.363208 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:02:22.363214 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:02:22.363220 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:02:22.363227 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:02:22.363234 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:02:22.363240 | orchestrator |
2026-04-16 05:02:22.363246 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-04-16 05:02:22.363253 | orchestrator | Thursday 16 April 2026 05:02:15 +0000 (0:00:00.724) 0:05:06.463 ********
2026-04-16 05:02:22.363272 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.363279 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.363285 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.363291 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:22.363297 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:22.363303 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:22.363310 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:22.363316 | orchestrator |
2026-04-16 05:02:22.363322 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-04-16 05:02:22.363329 | orchestrator | Thursday 16 April 2026 05:02:15 +0000 (0:00:00.240) 0:05:06.703 ********
2026-04-16 05:02:22.363335 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.363341 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.363353 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.363363 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:22.363373 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:22.363383 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:22.363394 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:22.363404 | orchestrator |
2026-04-16 05:02:22.363415 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-04-16 05:02:22.363425 | orchestrator | Thursday 16 April 2026 05:02:16 +0000 (0:00:00.340) 0:05:07.044 ********
2026-04-16 05:02:22.363435 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:22.363445 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:22.363456 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:22.363467 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:22.363477 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:22.363487 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:22.363498 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:22.363510 | orchestrator |
2026-04-16 05:02:22.363521 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-04-16 05:02:22.363533 | orchestrator | Thursday 16 April 2026 05:02:16 +0000 (0:00:00.270) 0:05:07.314 ********
2026-04-16 05:02:22.363544 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.363552 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.363559 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.363566 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:22.363573 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:22.363580 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:22.363587 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:22.363594 | orchestrator |
2026-04-16 05:02:22.363601 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-04-16 05:02:22.363608 | orchestrator | Thursday 16 April 2026 05:02:16 +0000 (0:00:00.232) 0:05:07.547 ********
2026-04-16 05:02:22.363616 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:22.363623 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:22.363630 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:22.363638 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:22.363645 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:22.363652 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:22.363659 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:22.363666 | orchestrator |
2026-04-16 05:02:22.363674 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-04-16 05:02:22.363681 | orchestrator | Thursday 16 April 2026 05:02:16 +0000 (0:00:00.270) 0:05:07.818 ********
2026-04-16 05:02:22.363689 | orchestrator | ok: [testbed-manager] =>
2026-04-16 05:02:22.363696 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363702 | orchestrator | ok: [testbed-node-3] =>
2026-04-16 05:02:22.363708 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363715 | orchestrator | ok: [testbed-node-4] =>
2026-04-16 05:02:22.363721 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363727 | orchestrator | ok: [testbed-node-5] =>
2026-04-16 05:02:22.363733 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363758 | orchestrator | ok: [testbed-node-0] =>
2026-04-16 05:02:22.363765 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363771 | orchestrator | ok: [testbed-node-1] =>
2026-04-16 05:02:22.363777 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363783 | orchestrator | ok: [testbed-node-2] =>
2026-04-16 05:02:22.363789 | orchestrator |  docker_version: 5:27.5.1
2026-04-16 05:02:22.363795 | orchestrator |
2026-04-16 05:02:22.363802 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-04-16 05:02:22.363808 | orchestrator | Thursday 16 April 2026 05:02:17 +0000 (0:00:00.248) 0:05:08.066 ********
2026-04-16 05:02:22.363814 | orchestrator | ok: [testbed-manager] =>
2026-04-16 05:02:22.363820 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363826 | orchestrator | ok: [testbed-node-3] =>
2026-04-16 05:02:22.363832 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363839 | orchestrator | ok: [testbed-node-4] =>
2026-04-16 05:02:22.363845 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363851 | orchestrator | ok: [testbed-node-5] =>
2026-04-16 05:02:22.363857 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363863 | orchestrator | ok: [testbed-node-0] =>
2026-04-16 05:02:22.363870 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363876 | orchestrator | ok: [testbed-node-1] =>
2026-04-16 05:02:22.363882 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363888 | orchestrator | ok: [testbed-node-2] =>
2026-04-16 05:02:22.363894 | orchestrator |  docker_cli_version: 5:27.5.1
2026-04-16 05:02:22.363901 | orchestrator |
2026-04-16 05:02:22.363907 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-04-16 05:02:22.363913 | orchestrator | Thursday 16 April 2026 05:02:17 +0000 (0:00:00.288) 0:05:08.354 ********
2026-04-16 05:02:22.363920 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.363926 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.363932 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.363938 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:22.363944 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:22.363950 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:22.363956 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:22.363963 | orchestrator |
2026-04-16 05:02:22.363969 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-04-16 05:02:22.363975 | orchestrator | Thursday 16 April 2026 05:02:17 +0000 (0:00:00.241) 0:05:08.596 ********
2026-04-16 05:02:22.363981 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.363988 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.363994 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.364000 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:02:22.364006 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:02:22.364012 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:02:22.364018 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:02:22.364024 | orchestrator |
2026-04-16 05:02:22.364031 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-04-16 05:02:22.364054 | orchestrator | Thursday 16 April 2026 05:02:17 +0000 (0:00:00.248) 0:05:08.845 ********
2026-04-16 05:02:22.364062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:02:22.364070 | orchestrator |
2026-04-16 05:02:22.364080 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-04-16 05:02:22.364087 | orchestrator | Thursday 16 April 2026 05:02:18 +0000 (0:00:00.398) 0:05:09.244 ********
2026-04-16 05:02:22.364093 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:22.364099 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:22.364106 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:22.364112 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:22.364118 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:22.364128 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:22.364134 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:22.364140 | orchestrator |
2026-04-16 05:02:22.364147 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-04-16 05:02:22.364153 | orchestrator | Thursday 16 April 2026 05:02:19 +0000 (0:00:00.919) 0:05:10.163 ********
2026-04-16 05:02:22.364159 | orchestrator | ok: [testbed-manager]
2026-04-16 05:02:22.364166 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:02:22.364172 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:02:22.364178 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:02:22.364184 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:02:22.364191 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:02:22.364197 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:02:22.364203 | orchestrator |
2026-04-16 05:02:22.364209 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-04-16 05:02:22.364216 | orchestrator | Thursday 16 April 2026 05:02:21 +0000 (0:00:02.756) 0:05:12.920 ********
2026-04-16 05:02:22.364223 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-04-16 05:02:22.364229 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-04-16 05:02:22.364236 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-04-16 05:02:22.364242 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-04-16 05:02:22.364249 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-04-16 05:02:22.364255 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-04-16 05:02:22.364261 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:02:22.364267 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-04-16 05:02:22.364274 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-04-16 05:02:22.364280 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-04-16 05:02:22.364286 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:02:22.364292 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-04-16 05:02:22.364298 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-04-16 05:02:22.364305 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-04-16 05:02:22.364311 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:02:22.364317 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-04-16 05:02:22.364327 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-04-16 05:03:23.117333 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-04-16 05:03:23.117451 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:23.117468 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-04-16 05:03:23.117480 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-04-16 05:03:23.117492 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:23.117503 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-04-16 05:03:23.117514 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:23.117525 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-04-16 05:03:23.117536 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-04-16 05:03:23.117548 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-04-16 05:03:23.117559 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:23.117570 | orchestrator |
2026-04-16 05:03:23.117583 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-04-16 05:03:23.117595 | orchestrator | Thursday 16 April 2026 05:02:22 +0000 (0:00:00.638) 0:05:13.559 ********
2026-04-16 05:03:23.117607 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.117618 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.117629 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.117640 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.117651 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.117663 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.117698 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.117710 | orchestrator |
2026-04-16 05:03:23.117721 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-04-16 05:03:23.117732 | orchestrator | Thursday 16 April 2026 05:02:29 +0000 (0:00:06.827) 0:05:20.386 ********
2026-04-16 05:03:23.117743 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.117754 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.117765 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.117776 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.117787 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.117836 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.117848 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.117859 | orchestrator |
2026-04-16 05:03:23.117873 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-04-16 05:03:23.117887 | orchestrator | Thursday 16 April 2026 05:02:30 +0000 (0:00:01.034) 0:05:21.420 ********
2026-04-16 05:03:23.117900 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.117920 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.117939 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.117957 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.117975 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.117994 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118012 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118107 | orchestrator |
2026-04-16 05:03:23.118128 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-04-16 05:03:23.118148 | orchestrator | Thursday 16 April 2026 05:02:38 +0000 (0:00:08.581) 0:05:30.002 ********
2026-04-16 05:03:23.118166 | orchestrator | changed: [testbed-manager]
2026-04-16 05:03:23.118185 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.118204 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118222 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.118242 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.118260 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.118278 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118289 | orchestrator |
2026-04-16 05:03:23.118300 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-04-16 05:03:23.118311 | orchestrator | Thursday 16 April 2026 05:02:42 +0000 (0:00:03.279) 0:05:33.282 ********
2026-04-16 05:03:23.118322 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.118332 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.118343 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118354 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118365 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.118375 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.118386 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.118397 | orchestrator |
2026-04-16 05:03:23.118408 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-04-16 05:03:23.118419 | orchestrator | Thursday 16 April 2026 05:02:43 +0000 (0:00:01.288) 0:05:34.570 ********
2026-04-16 05:03:23.118430 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.118441 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118451 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.118462 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118472 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.118489 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.118507 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.118524 | orchestrator |
2026-04-16 05:03:23.118544 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-04-16 05:03:23.118563 | orchestrator | Thursday 16 April 2026 05:02:45 +0000 (0:00:01.465) 0:05:36.035 ********
2026-04-16 05:03:23.118581 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:23.118600 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:23.118618 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:23.118637 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:23.118673 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:23.118685 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:23.118703 | orchestrator | changed: [testbed-manager]
2026-04-16 05:03:23.118721 | orchestrator |
2026-04-16 05:03:23.118740 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-04-16 05:03:23.118759 | orchestrator | Thursday 16 April 2026 05:02:45 +0000 (0:00:00.585) 0:05:36.620 ********
2026-04-16 05:03:23.118778 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.118826 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.118838 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.118850 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.118860 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.118871 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118882 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118893 | orchestrator |
2026-04-16 05:03:23.118904 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-04-16 05:03:23.118936 | orchestrator | Thursday 16 April 2026 05:02:55 +0000 (0:00:09.890) 0:05:46.511 ********
2026-04-16 05:03:23.118948 | orchestrator | changed: [testbed-manager]
2026-04-16 05:03:23.118959 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.118970 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.118980 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.118991 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.119002 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.119013 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.119024 | orchestrator |
2026-04-16 05:03:23.119035 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-04-16 05:03:23.119046 | orchestrator | Thursday 16 April 2026 05:02:56 +0000 (0:00:00.926) 0:05:47.438 ********
2026-04-16 05:03:23.119057 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.119068 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.119079 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.119105 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.119118 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.119152 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.119171 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.119190 | orchestrator |
2026-04-16 05:03:23.119208 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-04-16 05:03:23.119220 | orchestrator | Thursday 16 April 2026 05:03:05 +0000 (0:00:09.143) 0:05:56.581 ********
2026-04-16 05:03:23.119230 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.119241 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.119252 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.119262 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.119273 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.119284 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.119295 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.119305 | orchestrator |
2026-04-16 05:03:23.119316 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-04-16 05:03:23.119327 | orchestrator | Thursday 16 April 2026 05:03:16 +0000 (0:00:10.856) 0:06:07.437 ********
2026-04-16 05:03:23.119338 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-04-16 05:03:23.119349 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-04-16 05:03:23.119360 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-04-16 05:03:23.119371 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-04-16 05:03:23.119381 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-04-16 05:03:23.119392 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-04-16 05:03:23.119403 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-04-16 05:03:23.119414 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-04-16 05:03:23.119424 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-04-16 05:03:23.119444 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-04-16 05:03:23.119455 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-04-16 05:03:23.119513 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-04-16 05:03:23.119525 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-04-16 05:03:23.119537 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-04-16 05:03:23.119548 | orchestrator |
2026-04-16 05:03:23.119559 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-04-16 05:03:23.119575 | orchestrator | Thursday 16 April 2026 05:03:17 +0000 (0:00:01.167) 0:06:08.605 ********
2026-04-16 05:03:23.119587 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:23.119597 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:23.119608 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:23.119619 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:23.119630 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:23.119640 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:23.119651 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:23.119662 | orchestrator |
2026-04-16 05:03:23.119673 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-04-16 05:03:23.119684 | orchestrator | Thursday 16 April 2026 05:03:18 +0000 (0:00:00.555) 0:06:09.161 ********
2026-04-16 05:03:23.119695 | orchestrator | ok: [testbed-manager]
2026-04-16 05:03:23.119706 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:03:23.119717 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:03:23.119727 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:03:23.119738 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:03:23.119749 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:03:23.119760 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:03:23.119771 | orchestrator |
2026-04-16 05:03:23.119782 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-04-16 05:03:23.119867 | orchestrator | Thursday 16 April 2026 05:03:22 +0000 (0:00:03.959) 0:06:13.120 ********
2026-04-16 05:03:23.119888 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:23.119902 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:23.119913 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:23.119924 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:23.119934 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:23.119945 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:23.119956 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:23.119967 | orchestrator |
2026-04-16 05:03:23.119979 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-04-16 05:03:23.119990 | orchestrator | Thursday 16 April 2026 05:03:22 +0000 (0:00:00.548) 0:06:13.669 ********
2026-04-16 05:03:23.120001 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-04-16 05:03:23.120012 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-04-16 05:03:23.120023 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:23.120034 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-04-16 05:03:23.120045 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-04-16 05:03:23.120056 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:23.120067 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-04-16 05:03:23.120078 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-04-16 05:03:23.120089 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:23.120111 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-04-16 05:03:41.863539 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-04-16 05:03:41.863685 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:41.863712 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-04-16 05:03:41.863810 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-04-16 05:03:41.863831 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:41.863885 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-04-16 05:03:41.863906 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-04-16 05:03:41.863927 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:41.863946 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-04-16 05:03:41.863964 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-04-16 05:03:41.863984 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:41.864003 | orchestrator |
2026-04-16 05:03:41.864024 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-04-16 05:03:41.864045 | orchestrator | Thursday 16 April 2026 05:03:23 +0000 (0:00:00.683) 0:06:14.352 ********
2026-04-16 05:03:41.864066 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:41.864086 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:41.864105 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:41.864123 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:41.864142 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:41.864161 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:41.864180 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:41.864199 | orchestrator |
2026-04-16 05:03:41.864218 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-04-16 05:03:41.864237 | orchestrator | Thursday 16 April 2026 05:03:23 +0000 (0:00:00.459) 0:06:14.812 ********
2026-04-16 05:03:41.864256 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:41.864273 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:41.864293 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:41.864311 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:41.864330 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:03:41.864349 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:03:41.864367 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:03:41.864387 | orchestrator |
2026-04-16 05:03:41.864406 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-04-16 05:03:41.864424 | orchestrator | Thursday 16 April 2026 05:03:24 +0000 (0:00:00.459) 0:06:15.271 ********
2026-04-16 05:03:41.864442 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:03:41.864461 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:03:41.864480 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:03:41.864498 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:03:41.864518 | orchestrator |
skipping: [testbed-node-0] 2026-04-16 05:03:41.864536 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:03:41.864555 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:03:41.864574 | orchestrator | 2026-04-16 05:03:41.864592 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-04-16 05:03:41.864610 | orchestrator | Thursday 16 April 2026 05:03:24 +0000 (0:00:00.510) 0:06:15.782 ******** 2026-04-16 05:03:41.864630 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.864648 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.864667 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.864686 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.864704 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.864745 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:03:41.864766 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.864785 | orchestrator | 2026-04-16 05:03:41.864803 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-04-16 05:03:41.864821 | orchestrator | Thursday 16 April 2026 05:03:26 +0000 (0:00:01.833) 0:06:17.616 ******** 2026-04-16 05:03:41.864841 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:03:41.864862 | orchestrator | 2026-04-16 05:03:41.864880 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-04-16 05:03:41.864899 | orchestrator | Thursday 16 April 2026 05:03:27 +0000 (0:00:00.796) 0:06:18.412 ******** 2026-04-16 05:03:41.864938 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.864957 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:03:41.864976 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:03:41.864994 | orchestrator | 
changed: [testbed-node-5] 2026-04-16 05:03:41.865011 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:03:41.865028 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:03:41.865046 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:03:41.865063 | orchestrator | 2026-04-16 05:03:41.865081 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-04-16 05:03:41.865100 | orchestrator | Thursday 16 April 2026 05:03:28 +0000 (0:00:00.808) 0:06:19.221 ******** 2026-04-16 05:03:41.865118 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.865136 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:03:41.865154 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:03:41.865171 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:03:41.865190 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:03:41.865210 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:03:41.865229 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:03:41.865246 | orchestrator | 2026-04-16 05:03:41.865266 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-04-16 05:03:41.865284 | orchestrator | Thursday 16 April 2026 05:03:29 +0000 (0:00:00.856) 0:06:20.077 ******** 2026-04-16 05:03:41.865302 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.865320 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:03:41.865338 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:03:41.865358 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:03:41.865376 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:03:41.865394 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:03:41.865405 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:03:41.865416 | orchestrator | 2026-04-16 05:03:41.865427 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-04-16 05:03:41.865462 | 
orchestrator | Thursday 16 April 2026 05:03:30 +0000 (0:00:01.606) 0:06:21.684 ******** 2026-04-16 05:03:41.865474 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:03:41.865485 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.865496 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.865507 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:03:41.865518 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.865529 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.865540 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.865550 | orchestrator | 2026-04-16 05:03:41.865561 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-04-16 05:03:41.865572 | orchestrator | Thursday 16 April 2026 05:03:32 +0000 (0:00:01.347) 0:06:23.032 ******** 2026-04-16 05:03:41.865583 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.865594 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:03:41.865605 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:03:41.865616 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:03:41.865627 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:03:41.865637 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:03:41.865648 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:03:41.865659 | orchestrator | 2026-04-16 05:03:41.865670 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-04-16 05:03:41.865681 | orchestrator | Thursday 16 April 2026 05:03:33 +0000 (0:00:01.327) 0:06:24.360 ******** 2026-04-16 05:03:41.865692 | orchestrator | changed: [testbed-manager] 2026-04-16 05:03:41.865702 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:03:41.865713 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:03:41.865752 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:03:41.865772 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:03:41.865790 | 
orchestrator | changed: [testbed-node-1] 2026-04-16 05:03:41.865808 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:03:41.865826 | orchestrator | 2026-04-16 05:03:41.865851 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-04-16 05:03:41.865862 | orchestrator | Thursday 16 April 2026 05:03:34 +0000 (0:00:01.380) 0:06:25.740 ******** 2026-04-16 05:03:41.865873 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:03:41.865884 | orchestrator | 2026-04-16 05:03:41.865896 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-04-16 05:03:41.865906 | orchestrator | Thursday 16 April 2026 05:03:35 +0000 (0:00:01.078) 0:06:26.819 ******** 2026-04-16 05:03:41.865916 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.865925 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.865935 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.865944 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.865954 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:03:41.865963 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.865973 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.865982 | orchestrator | 2026-04-16 05:03:41.865992 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-04-16 05:03:41.866002 | orchestrator | Thursday 16 April 2026 05:03:37 +0000 (0:00:01.371) 0:06:28.190 ******** 2026-04-16 05:03:41.866012 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.866085 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.866095 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.866105 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.866160 | orchestrator | 
ok: [testbed-node-0] 2026-04-16 05:03:41.866179 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.866191 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.866200 | orchestrator | 2026-04-16 05:03:41.866210 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-04-16 05:03:41.866220 | orchestrator | Thursday 16 April 2026 05:03:38 +0000 (0:00:01.070) 0:06:29.261 ******** 2026-04-16 05:03:41.866229 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.866239 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.866249 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.866258 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.866267 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:03:41.866277 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.866286 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.866295 | orchestrator | 2026-04-16 05:03:41.866305 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-04-16 05:03:41.866315 | orchestrator | Thursday 16 April 2026 05:03:39 +0000 (0:00:01.130) 0:06:30.392 ******** 2026-04-16 05:03:41.866324 | orchestrator | ok: [testbed-manager] 2026-04-16 05:03:41.866334 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:03:41.866343 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:03:41.866352 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:03:41.866362 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:03:41.866371 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:03:41.866380 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:03:41.866390 | orchestrator | 2026-04-16 05:03:41.866399 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-04-16 05:03:41.866409 | orchestrator | Thursday 16 April 2026 05:03:40 +0000 (0:00:01.283) 0:06:31.675 ******** 2026-04-16 05:03:41.866419 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:03:41.866429 | orchestrator | 2026-04-16 05:03:41.866439 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:03:41.866448 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.887) 0:06:32.563 ******** 2026-04-16 05:03:41.866457 | orchestrator | 2026-04-16 05:03:41.866467 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:03:41.866484 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.039) 0:06:32.602 ******** 2026-04-16 05:03:41.866494 | orchestrator | 2026-04-16 05:03:41.866503 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:03:41.866513 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.048) 0:06:32.651 ******** 2026-04-16 05:03:41.866522 | orchestrator | 2026-04-16 05:03:41.866532 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:03:41.866551 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.039) 0:06:32.690 ******** 2026-04-16 05:04:07.151245 | orchestrator | 2026-04-16 05:04:07.151365 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:04:07.151382 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.039) 0:06:32.730 ******** 2026-04-16 05:04:07.151394 | orchestrator | 2026-04-16 05:04:07.151406 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:04:07.151419 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.044) 0:06:32.774 ******** 2026-04-16 05:04:07.151431 | orchestrator | 2026-04-16 
05:04:07.151443 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-04-16 05:04:07.151455 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.039) 0:06:32.813 ******** 2026-04-16 05:04:07.151466 | orchestrator | 2026-04-16 05:04:07.151479 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-04-16 05:04:07.151491 | orchestrator | Thursday 16 April 2026 05:03:41 +0000 (0:00:00.038) 0:06:32.852 ******** 2026-04-16 05:04:07.151503 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:07.151516 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:07.151528 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:04:07.151540 | orchestrator | 2026-04-16 05:04:07.151551 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-04-16 05:04:07.151563 | orchestrator | Thursday 16 April 2026 05:03:42 +0000 (0:00:01.141) 0:06:33.993 ******** 2026-04-16 05:04:07.151575 | orchestrator | changed: [testbed-manager] 2026-04-16 05:04:07.151588 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:07.151600 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:07.151611 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:07.151623 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:07.151673 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:07.151686 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:07.151697 | orchestrator | 2026-04-16 05:04:07.151709 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-04-16 05:04:07.151720 | orchestrator | Thursday 16 April 2026 05:03:44 +0000 (0:00:01.470) 0:06:35.464 ******** 2026-04-16 05:04:07.151731 | orchestrator | changed: [testbed-manager] 2026-04-16 05:04:07.151743 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:07.151754 | orchestrator | changed: [testbed-node-4] 2026-04-16 
05:04:07.151765 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:07.151776 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:07.151787 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:07.151798 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:07.151809 | orchestrator | 2026-04-16 05:04:07.151821 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-04-16 05:04:07.151832 | orchestrator | Thursday 16 April 2026 05:03:45 +0000 (0:00:01.143) 0:06:36.607 ******** 2026-04-16 05:04:07.151843 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:07.151854 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:07.151865 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:07.151876 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:07.151887 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:07.151898 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:07.151909 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:07.151920 | orchestrator | 2026-04-16 05:04:07.151932 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-04-16 05:04:07.151943 | orchestrator | Thursday 16 April 2026 05:03:47 +0000 (0:00:02.248) 0:06:38.855 ******** 2026-04-16 05:04:07.151994 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:07.152006 | orchestrator | 2026-04-16 05:04:07.152018 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-04-16 05:04:07.152029 | orchestrator | Thursday 16 April 2026 05:03:47 +0000 (0:00:00.100) 0:06:38.956 ******** 2026-04-16 05:04:07.152040 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.152052 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:07.152062 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:07.152073 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:07.152084 | 
orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:07.152101 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:07.152120 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:07.152140 | orchestrator | 2026-04-16 05:04:07.152157 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-04-16 05:04:07.152178 | orchestrator | Thursday 16 April 2026 05:03:48 +0000 (0:00:00.967) 0:06:39.924 ******** 2026-04-16 05:04:07.152197 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:07.152216 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:07.152231 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:04:07.152242 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:04:07.152253 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:04:07.152264 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:04:07.152275 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:04:07.152286 | orchestrator | 2026-04-16 05:04:07.152297 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-04-16 05:04:07.152308 | orchestrator | Thursday 16 April 2026 05:03:49 +0000 (0:00:00.566) 0:06:40.490 ******** 2026-04-16 05:04:07.152320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:04:07.152334 | orchestrator | 2026-04-16 05:04:07.152346 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-04-16 05:04:07.152357 | orchestrator | Thursday 16 April 2026 05:03:50 +0000 (0:00:01.022) 0:06:41.513 ******** 2026-04-16 05:04:07.152368 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.152379 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:04:07.152390 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 05:04:07.152401 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:04:07.152412 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:07.152423 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:07.152434 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:04:07.152445 | orchestrator | 2026-04-16 05:04:07.152457 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-04-16 05:04:07.152468 | orchestrator | Thursday 16 April 2026 05:03:51 +0000 (0:00:00.826) 0:06:42.340 ******** 2026-04-16 05:04:07.152479 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-04-16 05:04:07.152510 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-04-16 05:04:07.152523 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-04-16 05:04:07.152535 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-04-16 05:04:07.152546 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-04-16 05:04:07.152557 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-04-16 05:04:07.152568 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-04-16 05:04:07.152579 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-04-16 05:04:07.152590 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-04-16 05:04:07.152601 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-04-16 05:04:07.152612 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-04-16 05:04:07.152623 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-04-16 05:04:07.152670 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-04-16 05:04:07.152681 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-04-16 05:04:07.152693 | orchestrator | 2026-04-16 05:04:07.152704 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-04-16 05:04:07.152715 | orchestrator | Thursday 16 April 2026 05:03:53 +0000 (0:00:02.448) 0:06:44.788 ******** 2026-04-16 05:04:07.152726 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:07.152738 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:07.152749 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:04:07.152760 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:04:07.152771 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:04:07.152782 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:04:07.152793 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:04:07.152804 | orchestrator | 2026-04-16 05:04:07.152815 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-04-16 05:04:07.152827 | orchestrator | Thursday 16 April 2026 05:03:54 +0000 (0:00:00.649) 0:06:45.438 ******** 2026-04-16 05:04:07.152839 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:04:07.152852 | orchestrator | 2026-04-16 05:04:07.152863 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-04-16 05:04:07.152875 | orchestrator | Thursday 16 April 2026 05:03:55 +0000 (0:00:00.824) 0:06:46.262 ******** 2026-04-16 05:04:07.152886 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.152897 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:04:07.152908 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:04:07.152919 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:04:07.152930 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:07.152941 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:07.152952 | orchestrator | ok: 
[testbed-node-2] 2026-04-16 05:04:07.152963 | orchestrator | 2026-04-16 05:04:07.152974 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-04-16 05:04:07.152991 | orchestrator | Thursday 16 April 2026 05:03:56 +0000 (0:00:00.813) 0:06:47.075 ******** 2026-04-16 05:04:07.153003 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.153014 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:04:07.153025 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:04:07.153036 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:04:07.153047 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:07.153057 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:07.153068 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:04:07.153079 | orchestrator | 2026-04-16 05:04:07.153091 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-04-16 05:04:07.153102 | orchestrator | Thursday 16 April 2026 05:03:57 +0000 (0:00:00.992) 0:06:48.068 ******** 2026-04-16 05:04:07.153113 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:07.153124 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:07.153135 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:04:07.153147 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:04:07.153158 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:04:07.153169 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:04:07.153180 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:04:07.153191 | orchestrator | 2026-04-16 05:04:07.153202 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-04-16 05:04:07.153213 | orchestrator | Thursday 16 April 2026 05:03:57 +0000 (0:00:00.532) 0:06:48.600 ******** 2026-04-16 05:04:07.153224 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.153235 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:04:07.153246 | 
orchestrator | ok: [testbed-node-3] 2026-04-16 05:04:07.153257 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:04:07.153268 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:07.153286 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:07.153297 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:04:07.153308 | orchestrator | 2026-04-16 05:04:07.153319 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-04-16 05:04:07.153331 | orchestrator | Thursday 16 April 2026 05:03:59 +0000 (0:00:01.490) 0:06:50.091 ******** 2026-04-16 05:04:07.153342 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:07.153353 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:07.153364 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:04:07.153374 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:04:07.153385 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:04:07.153396 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:04:07.153407 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:04:07.153418 | orchestrator | 2026-04-16 05:04:07.153429 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-04-16 05:04:07.153441 | orchestrator | Thursday 16 April 2026 05:03:59 +0000 (0:00:00.503) 0:06:50.594 ******** 2026-04-16 05:04:07.153452 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:07.153463 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:07.153474 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:07.153485 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:07.153496 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:07.153507 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:07.153525 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:38.800651 | orchestrator | 2026-04-16 05:04:38.800776 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-04-16 05:04:38.800795 | orchestrator | Thursday 16 April 2026 05:04:07 +0000 (0:00:07.548) 0:06:58.143 ******** 2026-04-16 05:04:38.800808 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:38.800821 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:38.800833 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:38.800844 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:38.800855 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:38.800866 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:38.800877 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:38.800889 | orchestrator | 2026-04-16 05:04:38.800900 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-04-16 05:04:38.800911 | orchestrator | Thursday 16 April 2026 05:04:08 +0000 (0:00:01.567) 0:06:59.710 ******** 2026-04-16 05:04:38.800922 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:38.800933 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:38.800944 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:38.800955 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:38.800966 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:38.800977 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:38.800988 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:38.800999 | orchestrator | 2026-04-16 05:04:38.801010 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-04-16 05:04:38.801021 | orchestrator | Thursday 16 April 2026 05:04:10 +0000 (0:00:01.707) 0:07:01.417 ******** 2026-04-16 05:04:38.801032 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:38.801043 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:04:38.801054 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:04:38.801065 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:04:38.801075 | 
orchestrator | changed: [testbed-node-0] 2026-04-16 05:04:38.801086 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:04:38.801097 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:04:38.801108 | orchestrator | 2026-04-16 05:04:38.801119 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-16 05:04:38.801131 | orchestrator | Thursday 16 April 2026 05:04:12 +0000 (0:00:01.597) 0:07:03.015 ******** 2026-04-16 05:04:38.801144 | orchestrator | ok: [testbed-manager] 2026-04-16 05:04:38.801156 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:04:38.801169 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:04:38.801206 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:04:38.801218 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:04:38.801228 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:04:38.801239 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:04:38.801250 | orchestrator | 2026-04-16 05:04:38.801261 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-16 05:04:38.801272 | orchestrator | Thursday 16 April 2026 05:04:12 +0000 (0:00:00.815) 0:07:03.830 ******** 2026-04-16 05:04:38.801283 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:04:38.801294 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:04:38.801306 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:04:38.801316 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:04:38.801327 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:04:38.801338 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:04:38.801349 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:04:38.801359 | orchestrator | 2026-04-16 05:04:38.801371 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-04-16 05:04:38.801382 | orchestrator | Thursday 16 April 2026 05:04:13 +0000 (0:00:00.890) 0:07:04.720 ******** 
2026-04-16 05:04:38.801393 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:04:38.801404 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:04:38.801414 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:04:38.801425 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:04:38.801436 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:04:38.801447 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:04:38.801458 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:04:38.801468 | orchestrator |
2026-04-16 05:04:38.801479 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-04-16 05:04:38.801490 | orchestrator | Thursday 16 April 2026 05:04:14 +0000 (0:00:00.486) 0:07:05.207 ********
2026-04-16 05:04:38.801501 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.801573 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.801588 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.801599 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.801610 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.801621 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.801632 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.801643 | orchestrator |
2026-04-16 05:04:38.801654 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-04-16 05:04:38.801665 | orchestrator | Thursday 16 April 2026 05:04:14 +0000 (0:00:00.488) 0:07:05.696 ********
2026-04-16 05:04:38.801676 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.801687 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.801698 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.801709 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.801720 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.801730 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.801741 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.801752 | orchestrator |
2026-04-16 05:04:38.801763 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-04-16 05:04:38.801774 | orchestrator | Thursday 16 April 2026 05:04:15 +0000 (0:00:00.628) 0:07:06.325 ********
2026-04-16 05:04:38.801785 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.801796 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.801806 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.801817 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.801828 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.801838 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.801849 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.801859 | orchestrator |
2026-04-16 05:04:38.801870 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-04-16 05:04:38.801881 | orchestrator | Thursday 16 April 2026 05:04:15 +0000 (0:00:00.500) 0:07:06.825 ********
2026-04-16 05:04:38.801892 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.801903 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.801923 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.801934 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.801944 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.801955 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.801966 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.801976 | orchestrator |
2026-04-16 05:04:38.802006 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-04-16 05:04:38.802073 | orchestrator | Thursday 16 April 2026 05:04:21 +0000 (0:00:05.484) 0:07:12.310 ********
2026-04-16 05:04:38.802087 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:04:38.802099 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:04:38.802110 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:04:38.802120 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:04:38.802131 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:04:38.802142 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:04:38.802153 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:04:38.802163 | orchestrator |
2026-04-16 05:04:38.802174 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-04-16 05:04:38.802186 | orchestrator | Thursday 16 April 2026 05:04:21 +0000 (0:00:00.554) 0:07:12.864 ********
2026-04-16 05:04:38.802199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:04:38.802213 | orchestrator |
2026-04-16 05:04:38.802224 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-04-16 05:04:38.802235 | orchestrator | Thursday 16 April 2026 05:04:22 +0000 (0:00:01.063) 0:07:13.927 ********
2026-04-16 05:04:38.802246 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.802257 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.802268 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.802279 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.802289 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.802300 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.802310 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.802321 | orchestrator |
2026-04-16 05:04:38.802332 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-04-16 05:04:38.802343 | orchestrator | Thursday 16 April 2026 05:04:24 +0000 (0:00:01.842) 0:07:15.769 ********
2026-04-16 05:04:38.802354 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.802365 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.802376 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.802386 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.802397 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.802408 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.802419 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.802429 | orchestrator |
2026-04-16 05:04:38.802440 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-04-16 05:04:38.802451 | orchestrator | Thursday 16 April 2026 05:04:25 +0000 (0:00:01.081) 0:07:16.851 ********
2026-04-16 05:04:38.802462 | orchestrator | ok: [testbed-manager]
2026-04-16 05:04:38.802472 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:04:38.802483 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:04:38.802494 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:04:38.802504 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:04:38.802515 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:04:38.802550 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:04:38.802561 | orchestrator |
2026-04-16 05:04:38.802572 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-04-16 05:04:38.802583 | orchestrator | Thursday 16 April 2026 05:04:26 +0000 (0:00:00.817) 0:07:17.668 ********
2026-04-16 05:04:38.802600 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802613 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802637 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802648 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802658 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802669 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802680 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-04-16 05:04:38.802690 | orchestrator |
2026-04-16 05:04:38.802701 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-04-16 05:04:38.802712 | orchestrator | Thursday 16 April 2026 05:04:28 +0000 (0:00:01.887) 0:07:19.555 ********
2026-04-16 05:04:38.802723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:04:38.802734 | orchestrator |
2026-04-16 05:04:38.802745 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-04-16 05:04:38.802756 | orchestrator | Thursday 16 April 2026 05:04:29 +0000 (0:00:00.828) 0:07:20.384 ********
2026-04-16 05:04:38.802767 | orchestrator | changed: [testbed-manager]
2026-04-16 05:04:38.802778 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:04:38.802789 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:04:38.802799 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:04:38.802810 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:04:38.802821 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:04:38.802831 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:04:38.802842 | orchestrator |
2026-04-16 05:04:38.802861 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-04-16 05:05:08.826580 | orchestrator | Thursday 16 April 2026 05:04:38 +0000 (0:00:09.409) 0:07:29.794 ********
2026-04-16 05:05:08.826680 | orchestrator | ok: [testbed-manager]
2026-04-16 05:05:08.826697 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:08.826721 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:08.826733 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:08.826744 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:08.826754 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:08.826765 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:08.826776 | orchestrator |
2026-04-16 05:05:08.826788 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-04-16 05:05:08.826800 | orchestrator | Thursday 16 April 2026 05:04:40 +0000 (0:00:01.944) 0:07:31.738 ********
2026-04-16 05:05:08.826811 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:08.826822 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:08.826833 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:08.826843 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:08.826854 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:08.826865 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:08.826876 | orchestrator |
2026-04-16 05:05:08.826887 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-04-16 05:05:08.826898 | orchestrator | Thursday 16 April 2026 05:04:41 +0000 (0:00:01.270) 0:07:33.009 ********
2026-04-16 05:05:08.826909 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.826921 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.826932 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.826944 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.826955 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.826986 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.826997 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.827008 | orchestrator |
2026-04-16 05:05:08.827019 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-04-16 05:05:08.827030 | orchestrator |
2026-04-16 05:05:08.827041 | orchestrator | TASK [Include hardening role] **************************************************
2026-04-16 05:05:08.827052 | orchestrator | Thursday 16 April 2026 05:04:43 +0000 (0:00:01.221) 0:07:34.230 ********
2026-04-16 05:05:08.827063 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:05:08.827074 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:05:08.827085 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:05:08.827095 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:05:08.827106 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:05:08.827117 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:05:08.827128 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:05:08.827138 | orchestrator |
2026-04-16 05:05:08.827149 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-04-16 05:05:08.827160 | orchestrator |
2026-04-16 05:05:08.827172 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-04-16 05:05:08.827183 | orchestrator | Thursday 16 April 2026 05:04:43 +0000 (0:00:00.744) 0:07:34.975 ********
2026-04-16 05:05:08.827194 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.827205 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.827215 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.827226 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.827237 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.827247 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.827258 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.827269 | orchestrator |
2026-04-16 05:05:08.827280 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-04-16 05:05:08.827303 | orchestrator | Thursday 16 April 2026 05:04:45 +0000 (0:00:01.283) 0:07:36.258 ********
2026-04-16 05:05:08.827314 | orchestrator | ok: [testbed-manager]
2026-04-16 05:05:08.827325 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:08.827336 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:08.827347 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:08.827358 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:08.827368 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:08.827379 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:08.827390 | orchestrator |
2026-04-16 05:05:08.827401 | orchestrator | TASK [Include auditd role] *****************************************************
2026-04-16 05:05:08.827412 | orchestrator | Thursday 16 April 2026 05:04:46 +0000 (0:00:01.402) 0:07:37.661 ********
2026-04-16 05:05:08.827453 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:05:08.827464 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:05:08.827475 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:05:08.827486 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:05:08.827497 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:05:08.827508 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:05:08.827518 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:05:08.827529 | orchestrator |
2026-04-16 05:05:08.827540 | orchestrator | TASK [Include smartd role] *****************************************************
2026-04-16 05:05:08.827551 | orchestrator | Thursday 16 April 2026 05:04:47 +0000 (0:00:00.509) 0:07:38.170 ********
2026-04-16 05:05:08.827563 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:05:08.827575 | orchestrator |
2026-04-16 05:05:08.827586 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-04-16 05:05:08.827597 | orchestrator | Thursday 16 April 2026 05:04:48 +0000 (0:00:00.956) 0:07:39.127 ********
2026-04-16 05:05:08.827608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:05:08.827629 | orchestrator |
2026-04-16 05:05:08.827640 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-04-16 05:05:08.827651 | orchestrator | Thursday 16 April 2026 05:04:48 +0000 (0:00:00.753) 0:07:39.880 ********
2026-04-16 05:05:08.827661 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.827672 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.827683 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.827694 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.827705 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.827715 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.827726 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.827737 | orchestrator |
2026-04-16 05:05:08.827762 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-04-16 05:05:08.827774 | orchestrator | Thursday 16 April 2026 05:04:58 +0000 (0:00:09.282) 0:07:49.163 ********
2026-04-16 05:05:08.827785 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.827796 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.827807 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.827818 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.827829 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.827839 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.827850 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.827861 | orchestrator |
2026-04-16 05:05:08.827872 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-04-16 05:05:08.827883 | orchestrator | Thursday 16 April 2026 05:04:58 +0000 (0:00:00.825) 0:07:49.989 ********
2026-04-16 05:05:08.827893 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.827904 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.827915 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.827926 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.827936 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.827947 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.827958 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.827968 | orchestrator |
2026-04-16 05:05:08.827979 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-04-16 05:05:08.827990 | orchestrator | Thursday 16 April 2026 05:05:00 +0000 (0:00:01.279) 0:07:51.268 ********
2026-04-16 05:05:08.828001 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.828012 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.828022 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.828033 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.828044 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.828055 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.828065 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.828076 | orchestrator |
2026-04-16 05:05:08.828087 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-04-16 05:05:08.828098 | orchestrator | Thursday 16 April 2026 05:05:02 +0000 (0:00:01.791) 0:07:53.059 ********
2026-04-16 05:05:08.828108 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.828119 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.828130 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.828141 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.828152 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.828162 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.828173 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.828184 | orchestrator |
2026-04-16 05:05:08.828195 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-04-16 05:05:08.828206 | orchestrator | Thursday 16 April 2026 05:05:03 +0000 (0:00:01.198) 0:07:54.257 ********
2026-04-16 05:05:08.828216 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.828227 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.828244 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.828255 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.828265 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.828276 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.828286 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.828297 | orchestrator |
2026-04-16 05:05:08.828308 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-04-16 05:05:08.828319 | orchestrator |
2026-04-16 05:05:08.828335 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-04-16 05:05:08.828346 | orchestrator | Thursday 16 April 2026 05:05:04 +0000 (0:00:01.084) 0:07:55.342 ********
2026-04-16 05:05:08.828357 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:05:08.828368 | orchestrator |
2026-04-16 05:05:08.828379 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-16 05:05:08.828389 | orchestrator | Thursday 16 April 2026 05:05:05 +0000 (0:00:00.747) 0:07:56.090 ********
2026-04-16 05:05:08.828400 | orchestrator | ok: [testbed-manager]
2026-04-16 05:05:08.828411 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:08.828447 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:08.828459 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:08.828470 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:08.828480 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:08.828491 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:08.828502 | orchestrator |
2026-04-16 05:05:08.828513 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-16 05:05:08.828524 | orchestrator | Thursday 16 April 2026 05:05:06 +0000 (0:00:00.964) 0:07:57.055 ********
2026-04-16 05:05:08.828535 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:08.828546 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:08.828557 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:08.828568 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:08.828578 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:08.828589 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:08.828599 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:08.828610 | orchestrator |
2026-04-16 05:05:08.828621 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-04-16 05:05:08.828632 | orchestrator | Thursday 16 April 2026 05:05:07 +0000 (0:00:01.066) 0:07:58.121 ********
2026-04-16 05:05:08.828643 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:05:08.828654 | orchestrator |
2026-04-16 05:05:08.828664 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-04-16 05:05:08.828675 | orchestrator | Thursday 16 April 2026 05:05:08 +0000 (0:00:00.901) 0:07:59.023 ********
2026-04-16 05:05:08.828686 | orchestrator | ok: [testbed-manager]
2026-04-16 05:05:08.828697 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:08.828707 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:08.828718 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:08.828729 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:08.828739 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:08.828750 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:08.828760 | orchestrator |
2026-04-16 05:05:08.828779 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-04-16 05:05:10.333276 | orchestrator | Thursday 16 April 2026 05:05:08 +0000 (0:00:00.797) 0:07:59.821 ********
2026-04-16 05:05:10.333370 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:10.333385 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:10.333397 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:10.333408 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:10.333478 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:10.333490 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:10.333501 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:10.333535 | orchestrator |
2026-04-16 05:05:10.333548 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:05:10.333560 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-16 05:05:10.333572 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-16 05:05:10.333583 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-16 05:05:10.333594 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-16 05:05:10.333605 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-04-16 05:05:10.333616 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-16 05:05:10.333627 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-04-16 05:05:10.333638 | orchestrator |
2026-04-16 05:05:10.333649 | orchestrator |
2026-04-16 05:05:10.333660 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:05:10.333671 | orchestrator | Thursday 16 April 2026 05:05:09 +0000 (0:00:01.014) 0:08:00.835 ********
2026-04-16 05:05:10.333682 | orchestrator | ===============================================================================
2026-04-16 05:05:10.333693 | orchestrator | osism.commons.packages : Install required packages --------------------- 79.18s
2026-04-16 05:05:10.333704 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.75s
2026-04-16 05:05:10.333714 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.31s
2026-04-16 05:05:10.333725 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.16s
2026-04-16 05:05:10.333749 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.86s
2026-04-16 05:05:10.333760 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.02s
2026-04-16 05:05:10.333771 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.89s
2026-04-16 05:05:10.333782 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required --- 9.58s
2026-04-16 05:05:10.333794 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.41s
2026-04-16 05:05:10.333805 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.28s
2026-04-16 05:05:10.333815 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.14s
2026-04-16 05:05:10.333826 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.60s
2026-04-16 05:05:10.333839 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.58s
2026-04-16 05:05:10.333852 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.52s
2026-04-16 05:05:10.333870 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.12s
2026-04-16 05:05:10.333890 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.55s
2026-04-16 05:05:10.333911 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.83s
2026-04-16 05:05:10.333931 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.45s
2026-04-16 05:05:10.333950 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.79s
2026-04-16 05:05:10.333971 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.72s
2026-04-16 05:05:10.610480 | orchestrator | + osism apply fail2ban
2026-04-16 05:05:22.980247 | orchestrator | 2026-04-16 05:05:22 | INFO  | Task 443b5566-0323-4766-9eec-4385267f2d3c (fail2ban) was prepared for execution.
2026-04-16 05:05:22.980363 | orchestrator | 2026-04-16 05:05:22 | INFO  | It takes a moment until task 443b5566-0323-4766-9eec-4385267f2d3c (fail2ban) has been started and output is visible here.
2026-04-16 05:05:44.639115 | orchestrator |
2026-04-16 05:05:44.639235 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-04-16 05:05:44.639253 | orchestrator |
2026-04-16 05:05:44.639265 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-04-16 05:05:44.639278 | orchestrator | Thursday 16 April 2026 05:05:27 +0000 (0:00:00.257) 0:00:00.257 ********
2026-04-16 05:05:44.639290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:05:44.639356 | orchestrator |
2026-04-16 05:05:44.639370 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-04-16 05:05:44.639381 | orchestrator | Thursday 16 April 2026 05:05:28 +0000 (0:00:01.089) 0:00:01.347 ********
2026-04-16 05:05:44.639393 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:44.639405 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:44.639416 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:44.639440 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:44.639463 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:44.639474 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:44.639485 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:44.639497 | orchestrator |
2026-04-16 05:05:44.639508 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-04-16 05:05:44.639519 | orchestrator | Thursday 16 April 2026 05:05:39 +0000 (0:00:11.597) 0:00:12.945 ********
2026-04-16 05:05:44.639530 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:44.639541 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:44.639551 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:44.639562 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:44.639573 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:44.639584 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:44.639595 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:44.639606 | orchestrator |
2026-04-16 05:05:44.639617 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-04-16 05:05:44.639628 | orchestrator | Thursday 16 April 2026 05:05:41 +0000 (0:00:01.432) 0:00:14.423 ********
2026-04-16 05:05:44.639639 | orchestrator | ok: [testbed-manager]
2026-04-16 05:05:44.639650 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:05:44.639663 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:05:44.639676 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:05:44.639688 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:05:44.639700 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:05:44.639712 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:05:44.639724 | orchestrator |
2026-04-16 05:05:44.639737 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-04-16 05:05:44.639750 | orchestrator | Thursday 16 April 2026 05:05:42 +0000 (0:00:01.432) 0:00:15.856 ********
2026-04-16 05:05:44.639762 | orchestrator | changed: [testbed-manager]
2026-04-16 05:05:44.639775 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:05:44.639787 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:05:44.639800 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:05:44.639812 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:05:44.639824 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:05:44.639836 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:05:44.639848 | orchestrator |
2026-04-16 05:05:44.639860 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:05:44.639873 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639914 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639928 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639940 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639952 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639964 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639976 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:05:44.639989 | orchestrator |
2026-04-16 05:05:44.640001 | orchestrator |
2026-04-16 05:05:44.640014 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:05:44.640026 | orchestrator | Thursday 16 April 2026 05:05:44 +0000 (0:00:01.529) 0:00:17.385 ********
2026-04-16 05:05:44.640037 | orchestrator | ===============================================================================
2026-04-16 05:05:44.640047 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.60s
2026-04-16 05:05:44.640058 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.53s
2026-04-16 05:05:44.640069 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.48s
2026-04-16 05:05:44.640079 | orchestrator | osism.services.fail2ban :
Manage fail2ban service ----------------------- 1.43s 2026-04-16 05:05:44.640090 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s 2026-04-16 05:05:44.881088 | orchestrator | + osism apply network 2026-04-16 05:05:57.040157 | orchestrator | 2026-04-16 05:05:57 | INFO  | Task 9554692d-3df0-4446-846b-f708a2cc5301 (network) was prepared for execution. 2026-04-16 05:05:57.040337 | orchestrator | 2026-04-16 05:05:57 | INFO  | It takes a moment until task 9554692d-3df0-4446-846b-f708a2cc5301 (network) has been started and output is visible here. 2026-04-16 05:06:22.886511 | orchestrator | 2026-04-16 05:06:22.886636 | orchestrator | PLAY [Apply role network] ****************************************************** 2026-04-16 05:06:22.886661 | orchestrator | 2026-04-16 05:06:22.886680 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2026-04-16 05:06:22.886700 | orchestrator | Thursday 16 April 2026 05:06:01 +0000 (0:00:00.246) 0:00:00.246 ******** 2026-04-16 05:06:22.886719 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.886739 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.886759 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.886777 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.886797 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.886816 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.886835 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.886851 | orchestrator | 2026-04-16 05:06:22.886862 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2026-04-16 05:06:22.886873 | orchestrator | Thursday 16 April 2026 05:06:01 +0000 (0:00:00.550) 0:00:00.797 ******** 2026-04-16 05:06:22.886886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:06:22.886899 | orchestrator | 2026-04-16 05:06:22.886911 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-04-16 05:06:22.886922 | orchestrator | Thursday 16 April 2026 05:06:02 +0000 (0:00:00.867) 0:00:01.664 ******** 2026-04-16 05:06:22.886957 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.886969 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.886979 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.886990 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.887001 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.887011 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.887022 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.887033 | orchestrator | 2026-04-16 05:06:22.887046 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-04-16 05:06:22.887059 | orchestrator | Thursday 16 April 2026 05:06:04 +0000 (0:00:02.012) 0:00:03.677 ******** 2026-04-16 05:06:22.887073 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.887085 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.887098 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.887110 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.887123 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.887136 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.887148 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.887161 | orchestrator | 2026-04-16 05:06:22.887174 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-04-16 05:06:22.887186 | orchestrator | Thursday 16 April 2026 05:06:06 +0000 (0:00:01.775) 0:00:05.453 ******** 2026-04-16 05:06:22.887262 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-04-16 05:06:22.887276 | orchestrator | ok: 
[testbed-node-1] => (item=/etc/netplan) 2026-04-16 05:06:22.887289 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-04-16 05:06:22.887302 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-04-16 05:06:22.887315 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-04-16 05:06:22.887327 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-04-16 05:06:22.887340 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-04-16 05:06:22.887351 | orchestrator | 2026-04-16 05:06:22.887380 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-04-16 05:06:22.887397 | orchestrator | Thursday 16 April 2026 05:06:07 +0000 (0:00:00.808) 0:00:06.262 ******** 2026-04-16 05:06:22.887409 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:06:22.887421 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 05:06:22.887432 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 05:06:22.887443 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 05:06:22.887453 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 05:06:22.887464 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 05:06:22.887475 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 05:06:22.887486 | orchestrator | 2026-04-16 05:06:22.887497 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-04-16 05:06:22.887508 | orchestrator | Thursday 16 April 2026 05:06:09 +0000 (0:00:02.724) 0:00:08.986 ******** 2026-04-16 05:06:22.887519 | orchestrator | changed: [testbed-manager] 2026-04-16 05:06:22.887530 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:06:22.887540 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:06:22.887551 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:06:22.887562 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:06:22.887573 | orchestrator | 
changed: [testbed-node-4] 2026-04-16 05:06:22.887583 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:06:22.887594 | orchestrator | 2026-04-16 05:06:22.887605 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-04-16 05:06:22.887616 | orchestrator | Thursday 16 April 2026 05:06:11 +0000 (0:00:01.434) 0:00:10.421 ******** 2026-04-16 05:06:22.887627 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:06:22.887637 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 05:06:22.887648 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 05:06:22.887659 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 05:06:22.887670 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 05:06:22.887690 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 05:06:22.887701 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 05:06:22.887711 | orchestrator | 2026-04-16 05:06:22.887722 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-04-16 05:06:22.887733 | orchestrator | Thursday 16 April 2026 05:06:12 +0000 (0:00:01.431) 0:00:11.853 ******** 2026-04-16 05:06:22.887744 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.887755 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.887766 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.887777 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.887787 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.887798 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.887809 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.887820 | orchestrator | 2026-04-16 05:06:22.887831 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-04-16 05:06:22.887861 | orchestrator | Thursday 16 April 2026 05:06:13 +0000 (0:00:00.965) 0:00:12.818 ******** 2026-04-16 05:06:22.887873 | orchestrator 
| skipping: [testbed-manager] 2026-04-16 05:06:22.887884 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:06:22.887894 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:06:22.887905 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:06:22.887916 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:06:22.887927 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:06:22.887937 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:06:22.887948 | orchestrator | 2026-04-16 05:06:22.887959 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-04-16 05:06:22.887970 | orchestrator | Thursday 16 April 2026 05:06:14 +0000 (0:00:00.574) 0:00:13.393 ******** 2026-04-16 05:06:22.887981 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.887992 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.888003 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.888014 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.888024 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.888035 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.888046 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.888056 | orchestrator | 2026-04-16 05:06:22.888067 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-04-16 05:06:22.888078 | orchestrator | Thursday 16 April 2026 05:06:16 +0000 (0:00:02.078) 0:00:15.471 ******** 2026-04-16 05:06:22.888089 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:06:22.888100 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:06:22.888111 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:06:22.888122 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:06:22.888133 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:06:22.888143 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:06:22.888155 | orchestrator | changed: [testbed-manager] => (item={'dest': 
'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-04-16 05:06:22.888167 | orchestrator | 2026-04-16 05:06:22.888178 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-04-16 05:06:22.888189 | orchestrator | Thursday 16 April 2026 05:06:17 +0000 (0:00:00.733) 0:00:16.205 ******** 2026-04-16 05:06:22.888217 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.888229 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:06:22.888239 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:06:22.888250 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:06:22.888261 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:06:22.888272 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:06:22.888282 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:06:22.888293 | orchestrator | 2026-04-16 05:06:22.888304 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-04-16 05:06:22.888315 | orchestrator | Thursday 16 April 2026 05:06:18 +0000 (0:00:01.453) 0:00:17.658 ******** 2026-04-16 05:06:22.888326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:06:22.888346 | orchestrator | 2026-04-16 05:06:22.888357 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-16 05:06:22.888368 | orchestrator | Thursday 16 April 2026 05:06:19 +0000 (0:00:01.274) 0:00:18.933 ******** 2026-04-16 05:06:22.888379 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.888390 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.888400 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.888411 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.888426 | orchestrator | 
ok: [testbed-node-3] 2026-04-16 05:06:22.888438 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.888448 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.888459 | orchestrator | 2026-04-16 05:06:22.888470 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-04-16 05:06:22.888481 | orchestrator | Thursday 16 April 2026 05:06:20 +0000 (0:00:00.964) 0:00:19.897 ******** 2026-04-16 05:06:22.888492 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:22.888502 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:22.888513 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:22.888524 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:22.888534 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:22.888545 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:22.888555 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:22.888566 | orchestrator | 2026-04-16 05:06:22.888577 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-16 05:06:22.888588 | orchestrator | Thursday 16 April 2026 05:06:21 +0000 (0:00:00.796) 0:00:20.694 ******** 2026-04-16 05:06:22.888599 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888610 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888620 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888631 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888641 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888652 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888663 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888674 | orchestrator | 
skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888684 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888695 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-04-16 05:06:22.888706 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888716 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888727 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888738 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-04-16 05:06:22.888749 | orchestrator | 2026-04-16 05:06:22.888767 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-04-16 05:06:37.215675 | orchestrator | Thursday 16 April 2026 05:06:22 +0000 (0:00:01.216) 0:00:21.911 ******** 2026-04-16 05:06:37.215803 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:06:37.215821 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:06:37.215832 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:06:37.215842 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:06:37.215852 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:06:37.215861 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:06:37.215871 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:06:37.215880 | orchestrator | 2026-04-16 05:06:37.215891 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-04-16 05:06:37.215923 | orchestrator | Thursday 16 April 2026 05:06:23 +0000 (0:00:00.652) 0:00:22.564 ******** 2026-04-16 05:06:37.215936 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, 
testbed-manager, testbed-node-2, testbed-node-1, testbed-node-5, testbed-node-4, testbed-node-3 2026-04-16 05:06:37.215948 | orchestrator | 2026-04-16 05:06:37.215958 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-04-16 05:06:37.215968 | orchestrator | Thursday 16 April 2026 05:06:27 +0000 (0:00:03.882) 0:00:26.447 ******** 2026-04-16 05:06:37.215979 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216002 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216061 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216127 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-16 
05:06:37.216146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216213 | orchestrator | 2026-04-16 05:06:37.216231 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-04-16 05:06:37.216248 | orchestrator | Thursday 16 April 2026 05:06:32 +0000 (0:00:04.827) 0:00:31.274 ******** 2026-04-16 05:06:37.216265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216283 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216355 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-04-16 05:06:37.216390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': 
{'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:37.216441 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:42.526668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-04-16 05:06:42.526777 | orchestrator | 2026-04-16 05:06:42.526794 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-04-16 05:06:42.526806 | orchestrator | Thursday 16 April 2026 05:06:37 +0000 (0:00:04.967) 0:00:36.242 ******** 2026-04-16 05:06:42.526818 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:06:42.526829 | orchestrator | 2026-04-16 05:06:42.526846 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-04-16 05:06:42.526863 | orchestrator | Thursday 16 April 2026 05:06:38 +0000 (0:00:01.065) 0:00:37.307 ******** 2026-04-16 
05:06:42.526879 | orchestrator | ok: [testbed-manager] 2026-04-16 05:06:42.526898 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:06:42.526915 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:06:42.526932 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:06:42.526949 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:06:42.526967 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:06:42.526985 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:06:42.527001 | orchestrator | 2026-04-16 05:06:42.527017 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-04-16 05:06:42.527027 | orchestrator | Thursday 16 April 2026 05:06:39 +0000 (0:00:00.992) 0:00:38.300 ******** 2026-04-16 05:06:42.527037 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527047 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527057 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527066 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527076 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:06:42.527086 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527095 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527105 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527114 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527124 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:06:42.527133 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527192 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527206 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527218 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527229 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:06:42.527259 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527270 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527281 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527293 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527304 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527317 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527329 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527340 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527351 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:06:42.527363 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-04-16 05:06:42.527374 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-04-16 05:06:42.527386 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-04-16 05:06:42.527397 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-04-16 05:06:42.527408 | orchestrator | skipping: [testbed-node-3] 
2026-04-16 05:06:42.527419 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:06:42.527431 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-04-16 05:06:42.527442 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-04-16 05:06:42.527452 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-04-16 05:06:42.527467 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-04-16 05:06:42.527484 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:06:42.527518 | orchestrator |
2026-04-16 05:06:42.527548 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-04-16 05:06:42.527588 | orchestrator | Thursday 16 April 2026 05:06:41 +0000 (0:00:01.794) 0:00:40.094 ********
2026-04-16 05:06:42.527606 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:06:42.527618 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:06:42.527627 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:06:42.527637 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:06:42.527646 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:06:42.527656 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:06:42.527665 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:06:42.527674 | orchestrator |
2026-04-16 05:06:42.527684 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-04-16 05:06:42.527694 | orchestrator | Thursday 16 April 2026 05:06:41 +0000 (0:00:00.576) 0:00:40.670 ********
2026-04-16 05:06:42.527703 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:06:42.527718 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:06:42.527734 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:06:42.527751 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:06:42.527768 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:06:42.527784 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:06:42.527800 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:06:42.527816 | orchestrator |
2026-04-16 05:06:42.527832 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:06:42.527850 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 05:06:42.527869 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527899 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527912 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527922 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527931 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527940 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 05:06:42.527950 | orchestrator |
2026-04-16 05:06:42.527959 | orchestrator |
2026-04-16 05:06:42.527969 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:06:42.527978 | orchestrator | Thursday 16 April 2026 05:06:42 +0000 (0:00:00.631) 0:00:41.302 ********
2026-04-16 05:06:42.528003 | orchestrator | ===============================================================================
2026-04-16 05:06:42.528019 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.97s
2026-04-16 05:06:42.528035 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.83s
2026-04-16 05:06:42.528051 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.88s
2026-04-16 05:06:42.528068 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 2.72s
2026-04-16 05:06:42.528084 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.08s
2026-04-16 05:06:42.528101 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.01s
2026-04-16 05:06:42.528119 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.79s
2026-04-16 05:06:42.528136 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.78s
2026-04-16 05:06:42.528179 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.45s
2026-04-16 05:06:42.528194 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.43s
2026-04-16 05:06:42.528209 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.43s
2026-04-16 05:06:42.528223 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s
2026-04-16 05:06:42.528239 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.22s
2026-04-16 05:06:42.528256 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.07s
2026-04-16 05:06:42.528274 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2026-04-16 05:06:42.528289 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 0.97s
2026-04-16 05:06:42.528306 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.96s
2026-04-16 05:06:42.528315 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 0.87s
2026-04-16 05:06:42.528325 | orchestrator | osism.commons.network : Create required directories --------------------- 0.81s
2026-04-16 05:06:42.528334 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.80s
2026-04-16 05:06:42.718083 | orchestrator | + osism apply wireguard
2026-04-16 05:06:54.784029 | orchestrator | 2026-04-16 05:06:54 | INFO  | Task 58fa5b36-2162-4928-a53a-db92bba617e6 (wireguard) was prepared for execution.
2026-04-16 05:06:54.784203 | orchestrator | 2026-04-16 05:06:54 | INFO  | It takes a moment until task 58fa5b36-2162-4928-a53a-db92bba617e6 (wireguard) has been started and output is visible here.
2026-04-16 05:07:12.209238 | orchestrator |
2026-04-16 05:07:12.209358 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-04-16 05:07:12.209405 | orchestrator |
2026-04-16 05:07:12.209427 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-04-16 05:07:12.209445 | orchestrator | Thursday 16 April 2026 05:06:58 +0000 (0:00:00.161) 0:00:00.161 ********
2026-04-16 05:07:12.209463 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:12.209483 | orchestrator |
2026-04-16 05:07:12.209500 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-04-16 05:07:12.209518 | orchestrator | Thursday 16 April 2026 05:06:59 +0000 (0:00:01.205) 0:00:01.366 ********
2026-04-16 05:07:12.209537 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.209555 | orchestrator |
2026-04-16 05:07:12.209573 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-04-16 05:07:12.209585 | orchestrator | Thursday 16 April 2026 05:07:05 +0000 (0:00:05.187) 0:00:06.554 ********
2026-04-16 05:07:12.209596 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.209607 | orchestrator |
2026-04-16 05:07:12.209618 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-04-16 05:07:12.209628 | orchestrator | Thursday 16 April 2026 05:07:05 +0000 (0:00:00.522) 0:00:07.076 ********
2026-04-16 05:07:12.209639 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.209650 | orchestrator |
2026-04-16 05:07:12.209661 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-04-16 05:07:12.209671 | orchestrator | Thursday 16 April 2026 05:07:05 +0000 (0:00:00.412) 0:00:07.488 ********
2026-04-16 05:07:12.209682 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:12.209693 | orchestrator |
2026-04-16 05:07:12.209704 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-04-16 05:07:12.209715 | orchestrator | Thursday 16 April 2026 05:07:06 +0000 (0:00:00.625) 0:00:08.114 ********
2026-04-16 05:07:12.209726 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:12.209739 | orchestrator |
2026-04-16 05:07:12.209751 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-04-16 05:07:12.209764 | orchestrator | Thursday 16 April 2026 05:07:07 +0000 (0:00:00.404) 0:00:08.518 ********
2026-04-16 05:07:12.209776 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:12.209788 | orchestrator |
2026-04-16 05:07:12.209800 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-04-16 05:07:12.209813 | orchestrator | Thursday 16 April 2026 05:07:07 +0000 (0:00:00.407) 0:00:08.925 ********
2026-04-16 05:07:12.209831 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.209849 | orchestrator |
2026-04-16 05:07:12.209869 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-04-16 05:07:12.209886 | orchestrator | Thursday 16 April 2026 05:07:08 +0000 (0:00:01.114) 0:00:10.039 ********
2026-04-16 05:07:12.209904 | orchestrator | changed: [testbed-manager] => (item=None)
2026-04-16 05:07:12.209924 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.209945 | orchestrator |
2026-04-16 05:07:12.209965 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-04-16 05:07:12.209984 | orchestrator | Thursday 16 April 2026 05:07:09 +0000 (0:00:00.896) 0:00:10.935 ********
2026-04-16 05:07:12.210003 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.210152 | orchestrator |
2026-04-16 05:07:12.210169 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-04-16 05:07:12.210181 | orchestrator | Thursday 16 April 2026 05:07:11 +0000 (0:00:01.580) 0:00:12.516 ********
2026-04-16 05:07:12.210192 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:12.210203 | orchestrator |
2026-04-16 05:07:12.210214 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:07:12.210226 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:07:12.210238 | orchestrator |
2026-04-16 05:07:12.210249 | orchestrator |
2026-04-16 05:07:12.210260 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:07:12.210284 | orchestrator | Thursday 16 April 2026 05:07:11 +0000 (0:00:00.878) 0:00:13.394 ********
2026-04-16 05:07:12.210296 | orchestrator | ===============================================================================
2026-04-16 05:07:12.210307 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.19s
2026-04-16 05:07:12.210318 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.58s
2026-04-16 05:07:12.210329 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.21s
2026-04-16 05:07:12.210340 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s
2026-04-16 05:07:12.210351 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s
2026-04-16 05:07:12.210362 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.88s
2026-04-16 05:07:12.210387 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.63s
2026-04-16 05:07:12.210398 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s
2026-04-16 05:07:12.210426 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2026-04-16 05:07:12.210445 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2026-04-16 05:07:12.210463 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-04-16 05:07:12.467379 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-04-16 05:07:12.499692 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-04-16 05:07:12.499784 | orchestrator | Dload Upload Total Spent Left Speed
2026-04-16 05:07:12.578550 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 13 100 13 0 0 164 0 --:--:-- --:--:-- --:--:-- 166
2026-04-16 05:07:12.592375 | orchestrator | + osism apply --environment custom workarounds
2026-04-16 05:07:14.464419 | orchestrator | 2026-04-16 05:07:14 | INFO  | Trying to run play workarounds in environment custom
2026-04-16 05:07:24.685473 | orchestrator | 2026-04-16 05:07:24 | INFO  | Task ac5d3cb5-7e23-448a-b7ee-df230058eb28 (workarounds) was prepared for execution.
2026-04-16 05:07:24.685609 | orchestrator | 2026-04-16 05:07:24 | INFO  | It takes a moment until task ac5d3cb5-7e23-448a-b7ee-df230058eb28 (workarounds) has been started and output is visible here.
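The wireguard play above generates server/preshared keys and then writes `wg0.conf` before enabling `wg-quick@wg0.service`. The file that task writes follows the standard wg-quick INI format; a hypothetical minimal example is sketched below (all keys, addresses, and the peer block are illustrative placeholders, not values from this job or the OSISM role):

```ini
; /etc/wireguard/wg0.conf -- illustrative sketch only, not the file written by this job
[Interface]
; server private key, as created by the "Create public and private key - server" task
PrivateKey = <server-private-key>
Address = 192.168.100.1/24
ListenPort = 51820

[Peer]
; one such block per client, matching the "Copy client configuration files" task
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.100.2/32
```

With a file of this shape in place, `systemctl restart wg-quick@wg0` (the "Restart wg0 service" handler) brings the tunnel up.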
2026-04-16 05:07:49.872419 | orchestrator |
2026-04-16 05:07:49.872539 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 05:07:49.872557 | orchestrator |
2026-04-16 05:07:49.872569 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-04-16 05:07:49.872581 | orchestrator | Thursday 16 April 2026 05:07:28 +0000 (0:00:00.091) 0:00:00.091 ********
2026-04-16 05:07:49.872592 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872604 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872615 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872626 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872637 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872647 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872658 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-04-16 05:07:49.872669 | orchestrator |
2026-04-16 05:07:49.872680 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-04-16 05:07:49.872691 | orchestrator |
2026-04-16 05:07:49.872702 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-16 05:07:49.872713 | orchestrator | Thursday 16 April 2026 05:07:29 +0000 (0:00:00.548) 0:00:00.640 ********
2026-04-16 05:07:49.872724 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:49.872736 | orchestrator |
2026-04-16 05:07:49.872771 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-04-16 05:07:49.872783 | orchestrator |
2026-04-16 05:07:49.872794 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-04-16 05:07:49.872805 | orchestrator | Thursday 16 April 2026 05:07:31 +0000 (0:00:01.946) 0:00:02.586 ********
2026-04-16 05:07:49.872817 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:07:49.872828 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:07:49.872839 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:07:49.872849 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:07:49.872860 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:07:49.872871 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:07:49.872881 | orchestrator |
2026-04-16 05:07:49.872892 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-04-16 05:07:49.872903 | orchestrator |
2026-04-16 05:07:49.872914 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-04-16 05:07:49.872941 | orchestrator | Thursday 16 April 2026 05:07:32 +0000 (0:00:01.830) 0:00:04.416 ********
2026-04-16 05:07:49.872953 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.872966 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.872979 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.873058 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.873075 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.873087 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-04-16 05:07:49.873100 | orchestrator |
2026-04-16 05:07:49.873113 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-04-16 05:07:49.873125 | orchestrator | Thursday 16 April 2026 05:07:34 +0000 (0:00:01.487) 0:00:05.904 ********
2026-04-16 05:07:49.873138 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:07:49.873151 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:07:49.873164 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:07:49.873176 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:07:49.873189 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:07:49.873202 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:07:49.873214 | orchestrator |
2026-04-16 05:07:49.873227 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-04-16 05:07:49.873240 | orchestrator | Thursday 16 April 2026 05:07:38 +0000 (0:00:03.693) 0:00:09.598 ********
2026-04-16 05:07:49.873252 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:07:49.873264 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:07:49.873277 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:07:49.873291 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:07:49.873303 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:07:49.873315 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:07:49.873326 | orchestrator |
2026-04-16 05:07:49.873337 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-04-16 05:07:49.873348 | orchestrator |
2026-04-16 05:07:49.873358 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-04-16 05:07:49.873370 | orchestrator | Thursday 16 April 2026 05:07:38 +0000 (0:00:00.619) 0:00:10.217 ********
2026-04-16 05:07:49.873381 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:07:49.873391 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:07:49.873402 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:07:49.873413 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:07:49.873424 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:07:49.873434 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:07:49.873445 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:49.873465 | orchestrator |
2026-04-16 05:07:49.873476 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-04-16 05:07:49.873487 | orchestrator | Thursday 16 April 2026 05:07:40 +0000 (0:00:01.466) 0:00:11.683 ********
2026-04-16 05:07:49.873498 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:07:49.873508 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:07:49.873519 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:07:49.873530 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:07:49.873541 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:07:49.873552 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:07:49.873583 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:49.873594 | orchestrator |
2026-04-16 05:07:49.873605 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-04-16 05:07:49.873616 | orchestrator | Thursday 16 April 2026 05:07:41 +0000 (0:00:01.534) 0:00:13.218 ********
2026-04-16 05:07:49.873627 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:07:49.873643 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:07:49.873661 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:07:49.873680 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:07:49.873696 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:07:49.873713 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:07:49.873731 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:49.873748 | orchestrator |
2026-04-16 05:07:49.873765 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-04-16 05:07:49.873783 | orchestrator | Thursday 16 April 2026 05:07:43 +0000 (0:00:01.598) 0:00:14.816 ********
2026-04-16 05:07:49.873802 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:07:49.873820 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:07:49.873838 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:07:49.873857 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:07:49.873961 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:07:49.873979 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:07:49.873990 | orchestrator | changed: [testbed-manager]
2026-04-16 05:07:49.874134 | orchestrator |
2026-04-16 05:07:49.874146 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-04-16 05:07:49.874157 | orchestrator | Thursday 16 April 2026 05:07:46 +0000 (0:00:02.762) 0:00:17.579 ********
2026-04-16 05:07:49.874169 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:07:49.874179 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:07:49.874190 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:07:49.874201 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:07:49.874212 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:07:49.874223 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:07:49.874233 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:07:49.874244 | orchestrator |
2026-04-16 05:07:49.874255 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-04-16 05:07:49.874266 | orchestrator |
2026-04-16 05:07:49.874277 | orchestrator | TASK [Install python3-docker] **************************************************
2026-04-16 05:07:49.874288 | orchestrator | Thursday 16 April 2026 05:07:46 +0000 (0:00:00.692) 0:00:18.272 ********
2026-04-16 05:07:49.874299 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:07:49.874309 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:07:49.874320 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:07:49.874331 | orchestrator | ok: [testbed-manager]
2026-04-16 05:07:49.874342 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:07:49.874362 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:07:49.874373 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:07:49.874384 | orchestrator |
2026-04-16 05:07:49.874395 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:07:49.874407 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 05:07:49.874419 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874441 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874452 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874463 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874473 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874484 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:07:49.874495 | orchestrator |
2026-04-16 05:07:49.874506 | orchestrator |
2026-04-16 05:07:49.874517 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:07:49.874528 | orchestrator | Thursday 16 April 2026 05:07:49 +0000 (0:00:03.116) 0:00:21.389 ********
2026-04-16 05:07:49.874539 | orchestrator | ===============================================================================
2026-04-16 05:07:49.874550 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.69s
2026-04-16 05:07:49.874560 | orchestrator | Install python3-docker -------------------------------------------------- 3.12s
2026-04-16 05:07:49.874571 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.76s
2026-04-16 05:07:49.874582 | orchestrator | Apply netplan configuration --------------------------------------------- 1.95s
2026-04-16 05:07:49.874593 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s
2026-04-16 05:07:49.874604 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s
2026-04-16 05:07:49.874615 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.53s
2026-04-16 05:07:49.874630 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s
2026-04-16 05:07:49.874648 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.47s
2026-04-16 05:07:49.874665 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.69s
2026-04-16 05:07:49.874678 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.62s
2026-04-16 05:07:49.874721 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.55s
2026-04-16 05:07:50.432506 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-04-16 05:08:02.439169 | orchestrator | 2026-04-16 05:08:02 | INFO  | Task 85634055-d0fc-487a-b0b9-5c8fe830cfcf (reboot) was prepared for execution.
2026-04-16 05:08:02.439267 | orchestrator | 2026-04-16 05:08:02 | INFO  | It takes a moment until task 85634055-d0fc-487a-b0b9-5c8fe830cfcf (reboot) has been started and output is visible here.
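The reboot play that follows runs three tasks per host: a guard on `ireallymeanit`, a fire-and-forget reboot, and a (here skipped) wait step. The "do not wait" variant is the common Ansible idiom of launching the reboot asynchronously so the task returns before SSH drops; a minimal sketch under that assumption (task names mirror the log, but the `reboot_wait` variable and timings are illustrative, not the OSISM playbook source):

```yaml
# Illustrative sketch of the fire-and-forget reboot pattern, not the OSISM source
- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && shutdown -r now "Ansible-triggered reboot"
  async: 1          # let the command outlive the task
  poll: 0           # return immediately, do not wait for completion
  when: not (reboot_wait | default(false))

- name: Reboot system - wait for the reboot to complete
  ansible.builtin.reboot:
    reboot_timeout: 900
  when: reboot_wait | default(false)
```

Because the wait step is skipped here, reachability is verified afterwards by the separate `wait-for-connection` play.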
2026-04-16 05:08:11.488415 | orchestrator | 2026-04-16 05:08:11.488531 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.488557 | orchestrator | 2026-04-16 05:08:11.488577 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.488594 | orchestrator | Thursday 16 April 2026 05:08:06 +0000 (0:00:00.147) 0:00:00.147 ******** 2026-04-16 05:08:11.488613 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:08:11.488626 | orchestrator | 2026-04-16 05:08:11.488636 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.488646 | orchestrator | Thursday 16 April 2026 05:08:06 +0000 (0:00:00.076) 0:00:00.223 ******** 2026-04-16 05:08:11.488656 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:08:11.488665 | orchestrator | 2026-04-16 05:08:11.488675 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-16 05:08:11.488707 | orchestrator | Thursday 16 April 2026 05:08:07 +0000 (0:00:00.818) 0:00:01.042 ******** 2026-04-16 05:08:11.488717 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:08:11.488727 | orchestrator | 2026-04-16 05:08:11.488737 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.488746 | orchestrator | 2026-04-16 05:08:11.488756 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.488766 | orchestrator | Thursday 16 April 2026 05:08:07 +0000 (0:00:00.098) 0:00:01.140 ******** 2026-04-16 05:08:11.488775 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:08:11.488785 | orchestrator | 2026-04-16 05:08:11.488795 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.488804 | orchestrator | Thursday 16 April 
2026 05:08:07 +0000 (0:00:00.087) 0:00:01.227 ******** 2026-04-16 05:08:11.488814 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:08:11.488823 | orchestrator | 2026-04-16 05:08:11.488833 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-16 05:08:11.488855 | orchestrator | Thursday 16 April 2026 05:08:07 +0000 (0:00:00.612) 0:00:01.839 ******** 2026-04-16 05:08:11.488865 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:08:11.488875 | orchestrator | 2026-04-16 05:08:11.488885 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.488894 | orchestrator | 2026-04-16 05:08:11.488904 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.488913 | orchestrator | Thursday 16 April 2026 05:08:07 +0000 (0:00:00.100) 0:00:01.940 ******** 2026-04-16 05:08:11.488923 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:08:11.488932 | orchestrator | 2026-04-16 05:08:11.488942 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.488980 | orchestrator | Thursday 16 April 2026 05:08:08 +0000 (0:00:00.178) 0:00:02.118 ******** 2026-04-16 05:08:11.488998 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:08:11.489015 | orchestrator | 2026-04-16 05:08:11.489032 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-16 05:08:11.489048 | orchestrator | Thursday 16 April 2026 05:08:08 +0000 (0:00:00.607) 0:00:02.726 ******** 2026-04-16 05:08:11.489063 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:08:11.489079 | orchestrator | 2026-04-16 05:08:11.489095 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.489109 | orchestrator | 2026-04-16 05:08:11.489124 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.489141 | orchestrator | Thursday 16 April 2026 05:08:08 +0000 (0:00:00.098) 0:00:02.824 ******** 2026-04-16 05:08:11.489158 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:08:11.489175 | orchestrator | 2026-04-16 05:08:11.489191 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.489208 | orchestrator | Thursday 16 April 2026 05:08:08 +0000 (0:00:00.078) 0:00:02.903 ******** 2026-04-16 05:08:11.489226 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:08:11.489241 | orchestrator | 2026-04-16 05:08:11.489256 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-16 05:08:11.489273 | orchestrator | Thursday 16 April 2026 05:08:09 +0000 (0:00:00.616) 0:00:03.520 ******** 2026-04-16 05:08:11.489288 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:08:11.489304 | orchestrator | 2026-04-16 05:08:11.489319 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.489336 | orchestrator | 2026-04-16 05:08:11.489353 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.489370 | orchestrator | Thursday 16 April 2026 05:08:09 +0000 (0:00:00.097) 0:00:03.618 ******** 2026-04-16 05:08:11.489386 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:08:11.489402 | orchestrator | 2026-04-16 05:08:11.489415 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.489426 | orchestrator | Thursday 16 April 2026 05:08:09 +0000 (0:00:00.083) 0:00:03.702 ******** 2026-04-16 05:08:11.489447 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:08:11.489457 | orchestrator | 2026-04-16 05:08:11.489467 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-04-16 05:08:11.489476 | orchestrator | Thursday 16 April 2026 05:08:10 +0000 (0:00:00.613) 0:00:04.315 ******** 2026-04-16 05:08:11.489486 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:08:11.489495 | orchestrator | 2026-04-16 05:08:11.489506 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-04-16 05:08:11.489515 | orchestrator | 2026-04-16 05:08:11.489525 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-04-16 05:08:11.489535 | orchestrator | Thursday 16 April 2026 05:08:10 +0000 (0:00:00.107) 0:00:04.423 ******** 2026-04-16 05:08:11.489544 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:08:11.489554 | orchestrator | 2026-04-16 05:08:11.489563 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-04-16 05:08:11.489573 | orchestrator | Thursday 16 April 2026 05:08:10 +0000 (0:00:00.099) 0:00:04.523 ******** 2026-04-16 05:08:11.489582 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:08:11.489592 | orchestrator | 2026-04-16 05:08:11.489601 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-04-16 05:08:11.489611 | orchestrator | Thursday 16 April 2026 05:08:11 +0000 (0:00:00.665) 0:00:05.189 ******** 2026-04-16 05:08:11.489639 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:08:11.489649 | orchestrator | 2026-04-16 05:08:11.489659 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:08:11.489670 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:08:11.489681 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:08:11.489691 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-04-16 05:08:11.489701 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:08:11.489710 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:08:11.489720 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:08:11.489729 | orchestrator | 2026-04-16 05:08:11.489739 | orchestrator | 2026-04-16 05:08:11.489748 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:08:11.489758 | orchestrator | Thursday 16 April 2026 05:08:11 +0000 (0:00:00.038) 0:00:05.227 ******** 2026-04-16 05:08:11.489775 | orchestrator | =============================================================================== 2026-04-16 05:08:11.489785 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 3.93s 2026-04-16 05:08:11.489795 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.60s 2026-04-16 05:08:11.489804 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.54s 2026-04-16 05:08:11.766235 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-04-16 05:08:23.724799 | orchestrator | 2026-04-16 05:08:23 | INFO  | Task aaacf14f-b667-4ed4-9a7f-48158ba51768 (wait-for-connection) was prepared for execution. 2026-04-16 05:08:23.724911 | orchestrator | 2026-04-16 05:08:23 | INFO  | It takes a moment until task aaacf14f-b667-4ed4-9a7f-48158ba51768 (wait-for-connection) has been started and output is visible here. 
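The `osism apply reboot ... -e ireallymeanit=yes` / `osism apply wait-for-connection` pair above reboots the nodes without waiting and then blocks until they answer again. The same fire-then-poll idea as a plain shell sketch — `wait_for_nodes`, `node_reachable`, and the attempt/interval values are illustrative assumptions, not part of the job:

```shell
# Hypothetical sketch of the reboot-then-poll pattern (not the OSISM
# implementation). The reachability probe is a separate function so it
# can be swapped out or tested without real hosts.
node_reachable() {
    # assumption: probe via non-interactive ssh with a short timeout
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null
}

wait_for_nodes() {
    local max_attempts=$1 host attempt
    shift
    for host in "$@"; do
        attempt=1
        until node_reachable "$host"; do
            if [ "$attempt" -ge "$max_attempts" ]; then
                echo "$host still unreachable after $max_attempts attempts" >&2
                return 1
            fi
            attempt=$((attempt + 1))
            sleep 10
        done
    done
}
```

In the job itself this polling is done by an Ansible wait-for-connection play rather than raw ssh, which is why every node reports `ok=1 changed=0` once it is back.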
2026-04-16 05:08:39.316041 | orchestrator | 2026-04-16 05:08:39.316179 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-04-16 05:08:39.316195 | orchestrator | 2026-04-16 05:08:39.316205 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-04-16 05:08:39.316216 | orchestrator | Thursday 16 April 2026 05:08:27 +0000 (0:00:00.164) 0:00:00.164 ******** 2026-04-16 05:08:39.316227 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:08:39.316238 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:08:39.316247 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:08:39.316257 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:08:39.316267 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:08:39.316276 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:08:39.316285 | orchestrator | 2026-04-16 05:08:39.316295 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:08:39.316306 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316318 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316327 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316337 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316347 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316357 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:08:39.316380 | orchestrator | 2026-04-16 05:08:39.316391 | orchestrator | 2026-04-16 05:08:39.316401 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-16 05:08:39.316411 | orchestrator | Thursday 16 April 2026 05:08:38 +0000 (0:00:11.447) 0:00:11.611 ******** 2026-04-16 05:08:39.316421 | orchestrator | =============================================================================== 2026-04-16 05:08:39.316431 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.45s 2026-04-16 05:08:39.577878 | orchestrator | + osism apply hddtemp 2026-04-16 05:08:51.615233 | orchestrator | 2026-04-16 05:08:51 | INFO  | Task a3ffeb6b-8af4-4602-9f78-f46d2352de0b (hddtemp) was prepared for execution. 2026-04-16 05:08:51.615339 | orchestrator | 2026-04-16 05:08:51 | INFO  | It takes a moment until task a3ffeb6b-8af4-4602-9f78-f46d2352de0b (hddtemp) has been started and output is visible here. 2026-04-16 05:09:19.164649 | orchestrator | 2026-04-16 05:09:19.164783 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-04-16 05:09:19.164801 | orchestrator | 2026-04-16 05:09:19.164814 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-04-16 05:09:19.164898 | orchestrator | Thursday 16 April 2026 05:08:55 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-04-16 05:09:19.164913 | orchestrator | ok: [testbed-manager] 2026-04-16 05:09:19.164930 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:09:19.164950 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:09:19.164968 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:09:19.164987 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:09:19.165008 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:09:19.165024 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:09:19.165035 | orchestrator | 2026-04-16 05:09:19.165047 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-04-16 05:09:19.165058 | orchestrator | Thursday 16 April 2026 
05:08:56 +0000 (0:00:00.642) 0:00:00.887 ******** 2026-04-16 05:09:19.165070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:09:19.165109 | orchestrator | 2026-04-16 05:09:19.165121 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-04-16 05:09:19.165132 | orchestrator | Thursday 16 April 2026 05:08:57 +0000 (0:00:01.092) 0:00:01.980 ******** 2026-04-16 05:09:19.165142 | orchestrator | ok: [testbed-manager] 2026-04-16 05:09:19.165153 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:09:19.165164 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:09:19.165175 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:09:19.165187 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:09:19.165200 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:09:19.165214 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:09:19.165226 | orchestrator | 2026-04-16 05:09:19.165239 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-04-16 05:09:19.165267 | orchestrator | Thursday 16 April 2026 05:08:59 +0000 (0:00:01.906) 0:00:03.887 ******** 2026-04-16 05:09:19.165279 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:09:19.165293 | orchestrator | changed: [testbed-manager] 2026-04-16 05:09:19.165305 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:09:19.165318 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:09:19.165330 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:09:19.165343 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:09:19.165355 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:09:19.165368 | orchestrator | 2026-04-16 05:09:19.165381 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-04-16 05:09:19.165394 | orchestrator | Thursday 16 April 2026 05:09:00 +0000 (0:00:01.118) 0:00:05.006 ******** 2026-04-16 05:09:19.165406 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:09:19.165418 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:09:19.165430 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:09:19.165443 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:09:19.165456 | orchestrator | ok: [testbed-manager] 2026-04-16 05:09:19.165468 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:09:19.165480 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:09:19.165492 | orchestrator | 2026-04-16 05:09:19.165505 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-04-16 05:09:19.165519 | orchestrator | Thursday 16 April 2026 05:09:01 +0000 (0:00:01.134) 0:00:06.140 ******** 2026-04-16 05:09:19.165531 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:09:19.165543 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:09:19.165554 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:09:19.165564 | orchestrator | changed: [testbed-manager] 2026-04-16 05:09:19.165575 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:09:19.165586 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:09:19.165596 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:09:19.165607 | orchestrator | 2026-04-16 05:09:19.165618 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-04-16 05:09:19.165629 | orchestrator | Thursday 16 April 2026 05:09:02 +0000 (0:00:00.760) 0:00:06.900 ******** 2026-04-16 05:09:19.165640 | orchestrator | changed: [testbed-manager] 2026-04-16 05:09:19.165651 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:09:19.165662 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:09:19.165672 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:09:19.165683 | orchestrator | changed: 
[testbed-node-5] 2026-04-16 05:09:19.165694 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:09:19.165705 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:09:19.165715 | orchestrator | 2026-04-16 05:09:19.165726 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-04-16 05:09:19.165737 | orchestrator | Thursday 16 April 2026 05:09:15 +0000 (0:00:13.386) 0:00:20.287 ******** 2026-04-16 05:09:19.165748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:09:19.165759 | orchestrator | 2026-04-16 05:09:19.165779 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-04-16 05:09:19.165790 | orchestrator | Thursday 16 April 2026 05:09:17 +0000 (0:00:01.132) 0:00:21.419 ******** 2026-04-16 05:09:19.165801 | orchestrator | changed: [testbed-manager] 2026-04-16 05:09:19.165812 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:09:19.165847 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:09:19.165860 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:09:19.165871 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:09:19.165882 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:09:19.165893 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:09:19.165904 | orchestrator | 2026-04-16 05:09:19.165915 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:09:19.165926 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:09:19.165958 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.165972 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.165991 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.166009 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.166101 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.166113 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:09:19.166124 | orchestrator | 2026-04-16 05:09:19.166135 | orchestrator | 2026-04-16 05:09:19.166146 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:09:19.166168 | orchestrator | Thursday 16 April 2026 05:09:18 +0000 (0:00:01.832) 0:00:23.251 ******** 2026-04-16 05:09:19.166180 | orchestrator | =============================================================================== 2026-04-16 05:09:19.166191 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.39s 2026-04-16 05:09:19.166201 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.91s 2026-04-16 05:09:19.166212 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.83s 2026-04-16 05:09:19.166229 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.13s 2026-04-16 05:09:19.166240 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.13s 2026-04-16 05:09:19.166251 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2026-04-16 05:09:19.166262 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.09s 2026-04-16 05:09:19.166273 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.76s 2026-04-16 05:09:19.166284 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.64s 2026-04-16 05:09:19.420670 | orchestrator | ++ semver 9.5.0 7.1.1 2026-04-16 05:09:19.463025 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 05:09:19.463131 | orchestrator | + sudo systemctl restart manager.service 2026-04-16 05:09:32.637553 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-16 05:09:32.637668 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-04-16 05:09:32.637686 | orchestrator | + local max_attempts=60 2026-04-16 05:09:32.637699 | orchestrator | + local name=ceph-ansible 2026-04-16 05:09:32.637905 | orchestrator | + local attempt_num=1 2026-04-16 05:09:32.637927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:32.669630 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:32.669724 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:09:32.669740 | orchestrator | + sleep 5 2026-04-16 05:09:37.677150 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:37.700108 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:37.700179 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:09:37.700188 | orchestrator | + sleep 5 2026-04-16 05:09:42.703086 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:42.738381 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:42.738508 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:09:42.738531 | orchestrator | + sleep 5 2026-04-16 05:09:47.741814 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:47.773718 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:47.773853 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-04-16 05:09:47.773871 | orchestrator | + sleep 5 2026-04-16 05:09:52.777820 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:52.813840 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:52.813922 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:09:52.813937 | orchestrator | + sleep 5 2026-04-16 05:09:57.817663 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:09:57.854286 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:09:57.854469 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:09:57.854487 | orchestrator | + sleep 5 2026-04-16 05:10:02.859063 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:02.898231 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:02.898331 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:02.898347 | orchestrator | + sleep 5 2026-04-16 05:10:07.903907 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:07.937288 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:07.937364 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:07.937379 | orchestrator | + sleep 5 2026-04-16 05:10:12.939614 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:12.961089 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:12.961177 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:12.961190 | orchestrator | + sleep 5 2026-04-16 05:10:17.964413 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:17.999944 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:18.000042 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-04-16 05:10:18.000059 | orchestrator | + sleep 5 2026-04-16 05:10:23.004255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:23.038983 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:23.039116 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:23.039145 | orchestrator | + sleep 5 2026-04-16 05:10:28.042547 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:28.080345 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:28.080433 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:28.080448 | orchestrator | + sleep 5 2026-04-16 05:10:33.085893 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:33.123195 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:33.123292 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-04-16 05:10:33.123307 | orchestrator | + sleep 5 2026-04-16 05:10:38.128304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-04-16 05:10:38.163945 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:38.164041 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-04-16 05:10:38.164056 | orchestrator | + local max_attempts=60 2026-04-16 05:10:38.164069 | orchestrator | + local name=kolla-ansible 2026-04-16 05:10:38.164080 | orchestrator | + local attempt_num=1 2026-04-16 05:10:38.164268 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-04-16 05:10:38.197080 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:38.197339 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-04-16 05:10:38.197374 | orchestrator | + local max_attempts=60 2026-04-16 05:10:38.197435 | orchestrator | + local name=osism-ansible 2026-04-16 05:10:38.197471 | 
orchestrator | + local attempt_num=1 2026-04-16 05:10:38.197687 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-04-16 05:10:38.231977 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-04-16 05:10:38.232084 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-16 05:10:38.232108 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-04-16 05:10:38.388049 | orchestrator | ARA in ceph-ansible already disabled. 2026-04-16 05:10:38.523567 | orchestrator | ARA in kolla-ansible already disabled. 2026-04-16 05:10:38.644940 | orchestrator | ARA in osism-ansible already disabled. 2026-04-16 05:10:38.808352 | orchestrator | ARA in osism-kubernetes already disabled. 2026-04-16 05:10:38.809239 | orchestrator | + osism apply gather-facts 2026-04-16 05:10:50.977088 | orchestrator | 2026-04-16 05:10:50 | INFO  | Task 6f8e217e-d06a-4b66-9a18-3294c1c58a1d (gather-facts) was prepared for execution. 2026-04-16 05:10:50.977186 | orchestrator | 2026-04-16 05:10:50 | INFO  | It takes a moment until task 6f8e217e-d06a-4b66-9a18-3294c1c58a1d (gather-facts) has been started and output is visible here. 
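The `wait_for_container_healthy` trace above is xtrace output, so the loop is reconstructable: it polls `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy`, giving up after `max_attempts` tries. A sketch of that loop, with the probe factored into `container_is_healthy` so it can be exercised without a Docker daemon (that factoring is an assumption — the trace suggests the real function inlines the `docker inspect` call):

```shell
container_is_healthy() {
    # Probe kept separate for testability (an assumption); the traced
    # script calls /usr/bin/docker inspect directly.
    [ "$(docker inspect -f '{{.State.Health.Status}}' "$1")" = healthy ]
}

wait_for_container_healthy() {
    local max_attempts=$1 name=$2 attempt_num=1
    until container_is_healthy "$name"; do
        # Same post-increment check as in the trace: give up once the
        # counter reaches max_attempts, otherwise sleep and retry.
        if (( attempt_num++ == max_attempts )); then
            echo "timed out waiting for container $name" >&2
            return 1
        fi
        sleep 5
    done
}
```

With a 5-second interval and `max_attempts=60`, the ceph-ansible wait above (unhealthy → starting → healthy over roughly a minute) stays well inside the budget.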
2026-04-16 05:11:04.203341 | orchestrator | 2026-04-16 05:11:04.203441 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-16 05:11:04.203453 | orchestrator | 2026-04-16 05:11:04.203463 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-16 05:11:04.203472 | orchestrator | Thursday 16 April 2026 05:10:54 +0000 (0:00:00.158) 0:00:00.158 ******** 2026-04-16 05:11:04.203480 | orchestrator | ok: [testbed-manager] 2026-04-16 05:11:04.203491 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:11:04.203499 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:11:04.203507 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:11:04.203515 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:11:04.203523 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:11:04.203531 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:11:04.203539 | orchestrator | 2026-04-16 05:11:04.203547 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-16 05:11:04.203555 | orchestrator | 2026-04-16 05:11:04.203563 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-16 05:11:04.203572 | orchestrator | Thursday 16 April 2026 05:11:03 +0000 (0:00:09.053) 0:00:09.211 ******** 2026-04-16 05:11:04.203580 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:11:04.203589 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:11:04.203597 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:11:04.203605 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:11:04.203613 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:11:04.203621 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:11:04.203629 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:11:04.203637 | orchestrator | 2026-04-16 05:11:04.203645 | orchestrator | PLAY RECAP 
********************************************************************* 2026-04-16 05:11:04.203653 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203711 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203721 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203729 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203736 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203744 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203751 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:11:04.203780 | orchestrator | 2026-04-16 05:11:04.203788 | orchestrator | 2026-04-16 05:11:04.203795 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:11:04.203803 | orchestrator | Thursday 16 April 2026 05:11:03 +0000 (0:00:00.449) 0:00:09.661 ******** 2026-04-16 05:11:04.203810 | orchestrator | =============================================================================== 2026-04-16 05:11:04.203817 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.05s 2026-04-16 05:11:04.203825 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-04-16 05:11:04.376368 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-04-16 05:11:04.393519 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-04-16 
05:11:04.402265 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-04-16 05:11:04.416699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-04-16 05:11:04.434244 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-04-16 05:11:04.450248 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-04-16 05:11:04.466930 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-04-16 05:11:04.476700 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-04-16 05:11:04.484546 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-04-16 05:11:04.492199 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-04-16 05:11:04.500213 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-04-16 05:11:04.516153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-04-16 05:11:04.537403 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-04-16 05:11:04.546593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-04-16 05:11:04.558431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-04-16 05:11:04.567833 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-04-16 05:11:04.577816 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-04-16 05:11:04.586427 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-04-16 05:11:04.596856 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-04-16 05:11:04.606157 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh /usr/local/bin/bootstrap-octavia 2026-04-16 05:11:04.614770 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-04-16 05:11:04.621921 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-04-16 05:11:04.628994 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-04-16 05:11:04.636427 | orchestrator | + [[ false == \t\r\u\e ]] 2026-04-16 05:11:04.861430 | orchestrator | ok: Runtime: 0:23:23.583571 2026-04-16 05:11:04.965197 | 2026-04-16 05:11:04.965342 | TASK [Deploy services] 2026-04-16 05:11:05.639467 | orchestrator | 2026-04-16 05:11:05.639702 | orchestrator | # DEPLOY SERVICES 2026-04-16 05:11:05.639730 | orchestrator | 2026-04-16 05:11:05.639745 | orchestrator | + set -e 2026-04-16 05:11:05.639759 | orchestrator | + echo 2026-04-16 05:11:05.639773 | orchestrator | + echo '# DEPLOY SERVICES' 2026-04-16 05:11:05.639786 | orchestrator | + echo 2026-04-16 05:11:05.639831 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 05:11:05.639854 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 05:11:05.639870 | orchestrator | ++ INTERACTIVE=false 2026-04-16 
05:11:05.639881 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 05:11:05.639903 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 05:11:05.639930 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 05:11:05.639946 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 05:11:05.639958 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 05:11:05.639975 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 05:11:05.639986 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 05:11:05.640001 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 05:11:05.640013 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 05:11:05.640027 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 05:11:05.640038 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 05:11:05.640049 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 05:11:05.640061 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 05:11:05.640072 | orchestrator | ++ export ARA=false 2026-04-16 05:11:05.640083 | orchestrator | ++ ARA=false 2026-04-16 05:11:05.640094 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 05:11:05.640105 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 05:11:05.640116 | orchestrator | ++ export TEMPEST=false 2026-04-16 05:11:05.640127 | orchestrator | ++ TEMPEST=false 2026-04-16 05:11:05.640138 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 05:11:05.640149 | orchestrator | ++ IS_ZUUL=true 2026-04-16 05:11:05.640159 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:11:05.640170 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:11:05.640182 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 05:11:05.640192 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 05:11:05.640203 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 05:11:05.640214 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 05:11:05.640225 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 
05:11:05.640236 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 05:11:05.640247 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 05:11:05.640264 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 05:11:05.640276 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-04-16 05:11:05.646958 | orchestrator | + set -e 2026-04-16 05:11:05.647030 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 05:11:05.647045 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 05:11:05.647057 | orchestrator | ++ INTERACTIVE=false 2026-04-16 05:11:05.647068 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 05:11:05.647079 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 05:11:05.647089 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 05:11:05.647100 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 05:11:05.647111 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 05:11:05.647122 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 05:11:05.647133 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 05:11:05.647144 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 05:11:05.647154 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 05:11:05.647165 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 05:11:05.647176 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 05:11:05.647187 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 05:11:05.647198 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 05:11:05.647209 | orchestrator | ++ export ARA=false 2026-04-16 05:11:05.647220 | orchestrator | ++ ARA=false 2026-04-16 05:11:05.647231 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 05:11:05.647242 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 05:11:05.647253 | orchestrator | ++ export TEMPEST=false 2026-04-16 05:11:05.647267 | orchestrator | ++ TEMPEST=false 2026-04-16 05:11:05.647279 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 
05:11:05.647289 | orchestrator | ++ IS_ZUUL=true 2026-04-16 05:11:05.647300 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:11:05.647311 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:11:05.647322 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 05:11:05.647333 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 05:11:05.647343 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 05:11:05.647354 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 05:11:05.647375 | orchestrator | 2026-04-16 05:11:05.647386 | orchestrator | # PULL IMAGES 2026-04-16 05:11:05.647397 | orchestrator | 2026-04-16 05:11:05.647431 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 05:11:05.647443 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 05:11:05.647454 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 05:11:05.647465 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 05:11:05.647476 | orchestrator | + echo 2026-04-16 05:11:05.647487 | orchestrator | + echo '# PULL IMAGES' 2026-04-16 05:11:05.647498 | orchestrator | + echo 2026-04-16 05:11:05.648175 | orchestrator | ++ semver 9.5.0 7.0.0 2026-04-16 05:11:05.693486 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 05:11:05.693580 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-04-16 05:11:07.381158 | orchestrator | 2026-04-16 05:11:07 | INFO  | Trying to run play pull-images in environment custom 2026-04-16 05:11:17.481537 | orchestrator | 2026-04-16 05:11:17 | INFO  | Task 6b24d84f-ddf5-4c41-a9d8-debde869207d (pull-images) was prepared for execution. 2026-04-16 05:11:17.481647 | orchestrator | 2026-04-16 05:11:17 | INFO  | Task 6b24d84f-ddf5-4c41-a9d8-debde869207d is running in background. No more output. Check ARA for logs. 
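The trace above gates the image pull on a version check: `semver 9.5.0 7.0.0` prints `1` and the script then tests `[[ 1 -ge 0 ]]` before running `osism apply ... pull-images`. The `semver` helper itself is not shown in the log, so the following is only a sketch of the contract it appears to implement (positive for newer-than, `0` for equal, negative for older-than), built on `sort -V`:

```shell
#!/usr/bin/env bash
# semver_cmp A B -> prints 1 if A sorts after B, 0 if equal, -1 otherwise.
# Sketch only: the real `semver` helper used by the testbed scripts is not
# visible in the log; this mimics the observed behaviour
# (`semver 9.5.0 7.0.0` printing a value >= 0).
semver_cmp() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]; then
        echo 1    # $2 sorts first, so $1 is the newer version
    else
        echo -1
    fi
}

# Guard pattern from the script: only run the play on manager >= 7.0.0.
if [[ "$(semver_cmp 9.5.0 7.0.0)" -ge 0 ]]; then
    echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

`sort -V` handles multi-digit components correctly (`9.10.0` sorts after `9.5.0`), which plain lexical comparison would not.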
2026-04-16 05:11:17.716044 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-04-16 05:11:29.788693 | orchestrator | 2026-04-16 05:11:29 | INFO  | Task d3e6ca8c-6f34-49f0-9463-18db689ad421 (cgit) was prepared for execution.
2026-04-16 05:11:29.788792 | orchestrator | 2026-04-16 05:11:29 | INFO  | Task d3e6ca8c-6f34-49f0-9463-18db689ad421 is running in background. No more output. Check ARA for logs.
2026-04-16 05:11:41.745897 | orchestrator | 2026-04-16 05:11:41 | INFO  | Task 04abb501-33df-41f3-bed8-5d97803dc4f4 (dotfiles) was prepared for execution.
2026-04-16 05:11:41.746073 | orchestrator | 2026-04-16 05:11:41 | INFO  | Task 04abb501-33df-41f3-bed8-5d97803dc4f4 is running in background. No more output. Check ARA for logs.
2026-04-16 05:11:53.654299 | orchestrator | 2026-04-16 05:11:53 | INFO  | Task 35f28769-16e0-4e64-9707-d1f3a2f2fc57 (homer) was prepared for execution.
2026-04-16 05:11:53.654415 | orchestrator | 2026-04-16 05:11:53 | INFO  | Task 35f28769-16e0-4e64-9707-d1f3a2f2fc57 is running in background. No more output. Check ARA for logs.
2026-04-16 05:12:05.979904 | orchestrator | 2026-04-16 05:12:05 | INFO  | Task 20454260-6c33-4f67-9eed-1cb3bdcd5f57 (phpmyadmin) was prepared for execution.
2026-04-16 05:12:05.980005 | orchestrator | 2026-04-16 05:12:05 | INFO  | Task 20454260-6c33-4f67-9eed-1cb3bdcd5f57 is running in background. No more output. Check ARA for logs.
2026-04-16 05:12:18.269384 | orchestrator | 2026-04-16 05:12:18 | INFO  | Task 7fba20e6-69ec-4d5f-99eb-bb799494157a (sosreport) was prepared for execution.
2026-04-16 05:12:18.269499 | orchestrator | 2026-04-16 05:12:18 | INFO  | Task 7fba20e6-69ec-4d5f-99eb-bb799494157a is running in background. No more output. Check ARA for logs.
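The `001-helpers.sh` step queues five helper services (cgit, dotfiles, homer, phpmyadmin, sosreport) one after another; each `osism apply --no-wait` returns as soon as the play is handed off, which matches the "running in background" INFO lines. The script body itself is not in the log, so the loop and `queue_helpers` wrapper below are an assumed illustration of that pattern:

```shell
#!/usr/bin/env bash
# Sketch of the pattern 001-helpers.sh appears to follow, based on the five
# task names visible in the log. The real script is not shown; the list and
# the queue_helpers wrapper are illustrative assumptions.
helpers=(cgit dotfiles homer phpmyadmin sosreport)

queue_helpers() {
    local svc
    for svc in "${helpers[@]}"; do
        # --no-wait queues the play and returns immediately instead of
        # streaming its output, so the five plays run in the background.
        echo "osism apply --no-wait $svc"
    done
}

queue_helpers
```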
2026-04-16 05:12:18.599499 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-04-16 05:12:18.607509 | orchestrator | + set -e 2026-04-16 05:12:18.607571 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 05:12:18.607586 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 05:12:18.607624 | orchestrator | ++ INTERACTIVE=false 2026-04-16 05:12:18.607638 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 05:12:18.607650 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 05:12:18.607661 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 05:12:18.607672 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 05:12:18.607683 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 05:12:18.607694 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 05:12:18.607705 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 05:12:18.607716 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 05:12:18.607727 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 05:12:18.607738 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 05:12:18.607749 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 05:12:18.607760 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 05:12:18.607771 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 05:12:18.607782 | orchestrator | ++ export ARA=false 2026-04-16 05:12:18.607793 | orchestrator | ++ ARA=false 2026-04-16 05:12:18.607804 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 05:12:18.607859 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 05:12:18.607879 | orchestrator | ++ export TEMPEST=false 2026-04-16 05:12:18.607899 | orchestrator | ++ TEMPEST=false 2026-04-16 05:12:18.607915 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 05:12:18.607933 | orchestrator | ++ IS_ZUUL=true 2026-04-16 05:12:18.607973 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:12:18.608001 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 05:12:18.608022 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 05:12:18.608041 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 05:12:18.608058 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 05:12:18.608077 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 05:12:18.608097 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 05:12:18.608116 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 05:12:18.608135 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 05:12:18.608149 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 05:12:18.608172 | orchestrator | ++ semver 9.5.0 8.0.3 2026-04-16 05:12:18.658748 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 05:12:18.658850 | orchestrator | + osism apply frr 2026-04-16 05:12:31.373510 | orchestrator | 2026-04-16 05:12:31 | INFO  | Task 7a26f7da-5abd-4984-9b5e-97bce7ce7660 (frr) was prepared for execution. 2026-04-16 05:12:31.373690 | orchestrator | 2026-04-16 05:12:31 | INFO  | It takes a moment until task 7a26f7da-5abd-4984-9b5e-97bce7ce7660 (frr) has been started and output is visible here. 
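`include.sh` exports `OSISM_APPLY_RETRY=1`, and the earlier `pull-images` invocation passed `-r 2`, i.e. the apply is retried on failure. How the retry is wired inside `include.sh` is not visible in this log, so the `apply_with_retry` wrapper below is a hypothetical sketch of that idea:

```shell
#!/usr/bin/env bash
# Hypothetical retry wrapper illustrating the OSISM_APPLY_RETRY /
# `osism apply -r N` idea seen in the trace. Not taken from include.sh
# (its contents beyond the exports are not shown in the log).
apply_with_retry() {
    local retries=$1; shift
    local attempt rc
    for (( attempt = 0; attempt <= retries; attempt++ )); do
        "$@" && return 0
        rc=$?
        echo "attempt $attempt failed (rc=$rc)" >&2
    done
    return "$rc"
}
```

Usage would be e.g. `apply_with_retry "$OSISM_APPLY_RETRY" osism apply frr`: the command runs once plus up to `$retries` further attempts, and the last non-zero exit code is propagated if all attempts fail.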
2026-04-16 05:12:56.454186 | orchestrator | 2026-04-16 05:12:56.454293 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-04-16 05:12:56.454309 | orchestrator | 2026-04-16 05:12:56.454317 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-04-16 05:12:56.454328 | orchestrator | Thursday 16 April 2026 05:12:35 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-04-16 05:12:56.454335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 05:12:56.454345 | orchestrator | 2026-04-16 05:12:56.454353 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-04-16 05:12:56.454360 | orchestrator | Thursday 16 April 2026 05:12:35 +0000 (0:00:00.214) 0:00:00.401 ******** 2026-04-16 05:12:56.454367 | orchestrator | changed: [testbed-manager] 2026-04-16 05:12:56.454375 | orchestrator | 2026-04-16 05:12:56.454381 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-04-16 05:12:56.454394 | orchestrator | Thursday 16 April 2026 05:12:36 +0000 (0:00:00.928) 0:00:01.329 ******** 2026-04-16 05:12:56.454401 | orchestrator | changed: [testbed-manager] 2026-04-16 05:12:56.454407 | orchestrator | 2026-04-16 05:12:56.454414 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-04-16 05:12:56.454420 | orchestrator | Thursday 16 April 2026 05:12:46 +0000 (0:00:09.866) 0:00:11.196 ******** 2026-04-16 05:12:56.454427 | orchestrator | ok: [testbed-manager] 2026-04-16 05:12:56.454434 | orchestrator | 2026-04-16 05:12:56.454441 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-04-16 05:12:56.454449 | orchestrator | Thursday 16 April 2026 05:12:47 +0000 (0:00:00.850) 0:00:12.046 ******** 2026-04-16 
05:12:56.454456 | orchestrator | changed: [testbed-manager] 2026-04-16 05:12:56.454463 | orchestrator | 2026-04-16 05:12:56.454469 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-04-16 05:12:56.454476 | orchestrator | Thursday 16 April 2026 05:12:48 +0000 (0:00:00.852) 0:00:12.899 ******** 2026-04-16 05:12:56.454482 | orchestrator | ok: [testbed-manager] 2026-04-16 05:12:56.454489 | orchestrator | 2026-04-16 05:12:56.454497 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-04-16 05:12:56.454505 | orchestrator | Thursday 16 April 2026 05:12:49 +0000 (0:00:01.111) 0:00:14.010 ******** 2026-04-16 05:12:56.454511 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:12:56.454518 | orchestrator | 2026-04-16 05:12:56.454526 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-04-16 05:12:56.454533 | orchestrator | Thursday 16 April 2026 05:12:49 +0000 (0:00:00.127) 0:00:14.137 ******** 2026-04-16 05:12:56.454602 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:12:56.454613 | orchestrator | 2026-04-16 05:12:56.454619 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-04-16 05:12:56.454626 | orchestrator | Thursday 16 April 2026 05:12:49 +0000 (0:00:00.123) 0:00:14.261 ******** 2026-04-16 05:12:56.454632 | orchestrator | changed: [testbed-manager] 2026-04-16 05:12:56.454638 | orchestrator | 2026-04-16 05:12:56.454645 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-04-16 05:12:56.454651 | orchestrator | Thursday 16 April 2026 05:12:50 +0000 (0:00:00.900) 0:00:15.161 ******** 2026-04-16 05:12:56.454657 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-04-16 05:12:56.454665 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-04-16 05:12:56.454672 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-04-16 05:12:56.454679 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-04-16 05:12:56.454686 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-04-16 05:12:56.454692 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-04-16 05:12:56.454698 | orchestrator | 2026-04-16 05:12:56.454704 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-04-16 05:12:56.454711 | orchestrator | Thursday 16 April 2026 05:12:53 +0000 (0:00:02.963) 0:00:18.125 ******** 2026-04-16 05:12:56.454717 | orchestrator | ok: [testbed-manager] 2026-04-16 05:12:56.454724 | orchestrator | 2026-04-16 05:12:56.454730 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-04-16 05:12:56.454736 | orchestrator | Thursday 16 April 2026 05:12:54 +0000 (0:00:01.329) 0:00:19.455 ******** 2026-04-16 05:12:56.454742 | orchestrator | changed: [testbed-manager] 2026-04-16 05:12:56.454748 | orchestrator | 2026-04-16 05:12:56.454755 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:12:56.454762 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:12:56.454769 | orchestrator | 2026-04-16 05:12:56.454779 | orchestrator | 2026-04-16 05:12:56.454793 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:12:56.454799 | orchestrator | Thursday 16 April 2026 05:12:56 +0000 (0:00:01.301) 0:00:20.756 ******** 2026-04-16 05:12:56.454806 | 
orchestrator | =============================================================================== 2026-04-16 05:12:56.454812 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.87s 2026-04-16 05:12:56.454818 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.96s 2026-04-16 05:12:56.454824 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.33s 2026-04-16 05:12:56.454831 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.30s 2026-04-16 05:12:56.454837 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s 2026-04-16 05:12:56.454861 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 0.93s 2026-04-16 05:12:56.454867 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.90s 2026-04-16 05:12:56.454875 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.85s 2026-04-16 05:12:56.454880 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.85s 2026-04-16 05:12:56.454887 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.21s 2026-04-16 05:12:56.454892 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-04-16 05:12:56.454898 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.12s 2026-04-16 05:12:56.655080 | orchestrator | + osism apply kubernetes 2026-04-16 05:12:58.400788 | orchestrator | 2026-04-16 05:12:58 | INFO  | Task 5404918f-a592-4138-9631-cc910f04bce0 (kubernetes) was prepared for execution. 
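The frr play above applied six kernel parameters through Ansible's sysctl module, one per item. Collected into a single `sysctl.d`-style drop-in they look like this (the file path and the generation step are illustrative; the role itself set each key individually):

```shell
#!/usr/bin/env bash
# The six kernel parameters the osism.services.frr role set in the play
# above, written out as a sysctl.d drop-in. Writing to a temp file here;
# a real deployment would target e.g. /etc/sysctl.d/90-frr.conf
# (path assumed, not taken from the role).
outfile=$(mktemp)
cat > "$outfile" <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
echo "wrote $(wc -l < "$outfile") parameters to $outfile"
```

Forwarding is required because the manager routes for the nodes, multipath hashing and the linkdown setting support the ECMP/BGP setup that frr provides, and `rp_filter = 2` relaxes reverse-path filtering to loose mode for asymmetric routes.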
2026-04-16 05:12:58.400866 | orchestrator | 2026-04-16 05:12:58 | INFO  | It takes a moment until task 5404918f-a592-4138-9631-cc910f04bce0 (kubernetes) has been started and output is visible here. 2026-04-16 05:13:18.706468 | orchestrator | 2026-04-16 05:13:18.706616 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-04-16 05:13:18.706630 | orchestrator | 2026-04-16 05:13:18.706638 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-04-16 05:13:18.706646 | orchestrator | Thursday 16 April 2026 05:13:01 +0000 (0:00:00.134) 0:00:00.134 ******** 2026-04-16 05:13:18.706652 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:13:18.706660 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:13:18.706668 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:13:18.706676 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:13:18.706683 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:13:18.706689 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:13:18.706695 | orchestrator | 2026-04-16 05:13:18.706702 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-04-16 05:13:18.706708 | orchestrator | Thursday 16 April 2026 05:13:02 +0000 (0:00:00.549) 0:00:00.683 ******** 2026-04-16 05:13:18.706715 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.706722 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.706729 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.706736 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.706742 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.706749 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.706756 | orchestrator | 2026-04-16 05:13:18.706762 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-04-16 05:13:18.706771 | orchestrator | Thursday 16 April 2026 
05:13:02 +0000 (0:00:00.491) 0:00:01.175 ******** 2026-04-16 05:13:18.706778 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.706784 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.706791 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.706797 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.706803 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.706809 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.706815 | orchestrator | 2026-04-16 05:13:18.706822 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-04-16 05:13:18.706827 | orchestrator | Thursday 16 April 2026 05:13:03 +0000 (0:00:00.550) 0:00:01.725 ******** 2026-04-16 05:13:18.706834 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:13:18.706840 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:13:18.706846 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:13:18.706855 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:13:18.706861 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:13:18.706867 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:13:18.706873 | orchestrator | 2026-04-16 05:13:18.706879 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-04-16 05:13:18.706885 | orchestrator | Thursday 16 April 2026 05:13:04 +0000 (0:00:01.440) 0:00:03.166 ******** 2026-04-16 05:13:18.706891 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:13:18.706898 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:13:18.706904 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:13:18.706911 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:13:18.706918 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:13:18.706924 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:13:18.706930 | orchestrator | 2026-04-16 05:13:18.706937 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-04-16 05:13:18.706943 | orchestrator | Thursday 16 April 2026 05:13:06 +0000 (0:00:01.283) 0:00:04.449 ******** 2026-04-16 05:13:18.706950 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:13:18.706975 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:13:18.706980 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:13:18.706983 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:13:18.706987 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:13:18.706991 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:13:18.706994 | orchestrator | 2026-04-16 05:13:18.707005 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-04-16 05:13:18.707009 | orchestrator | Thursday 16 April 2026 05:13:07 +0000 (0:00:00.925) 0:00:05.375 ******** 2026-04-16 05:13:18.707016 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707022 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707029 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707034 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707040 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707047 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707054 | orchestrator | 2026-04-16 05:13:18.707061 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-04-16 05:13:18.707068 | orchestrator | Thursday 16 April 2026 05:13:07 +0000 (0:00:00.673) 0:00:06.048 ******** 2026-04-16 05:13:18.707075 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707082 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707089 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707096 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707103 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707110 | orchestrator | 
skipping: [testbed-node-2] 2026-04-16 05:13:18.707117 | orchestrator | 2026-04-16 05:13:18.707124 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-04-16 05:13:18.707131 | orchestrator | Thursday 16 April 2026 05:13:08 +0000 (0:00:00.477) 0:00:06.526 ******** 2026-04-16 05:13:18.707138 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707145 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707153 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707164 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707176 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707186 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707193 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707201 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707207 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707213 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707238 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707245 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707250 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707257 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707270 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707284 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 05:13:18.707300 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 05:13:18.707313 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707325 | orchestrator | 2026-04-16 05:13:18.707338 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-04-16 05:13:18.707349 | orchestrator | Thursday 16 April 2026 05:13:08 +0000 (0:00:00.486) 0:00:07.012 ******** 2026-04-16 05:13:18.707361 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707373 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707385 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707407 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707416 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707425 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707434 | orchestrator | 2026-04-16 05:13:18.707442 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-04-16 05:13:18.707453 | orchestrator | Thursday 16 April 2026 05:13:09 +0000 (0:00:01.037) 0:00:08.050 ******** 2026-04-16 05:13:18.707462 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:13:18.707472 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:13:18.707480 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:13:18.707489 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:13:18.707497 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:13:18.707506 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:13:18.707515 | orchestrator | 2026-04-16 05:13:18.707524 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-04-16 05:13:18.707533 | orchestrator | Thursday 16 April 2026 05:13:10 +0000 (0:00:00.731) 0:00:08.781 ******** 2026-04-16 05:13:18.707564 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:13:18.707575 | orchestrator | changed: 
[testbed-node-5] 2026-04-16 05:13:18.707584 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:13:18.707595 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:13:18.707605 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:13:18.707614 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:13:18.707624 | orchestrator | 2026-04-16 05:13:18.707635 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-04-16 05:13:18.707644 | orchestrator | Thursday 16 April 2026 05:13:15 +0000 (0:00:05.520) 0:00:14.302 ******** 2026-04-16 05:13:18.707653 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707672 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707680 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707689 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707699 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707708 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707718 | orchestrator | 2026-04-16 05:13:18.707727 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-04-16 05:13:18.707736 | orchestrator | Thursday 16 April 2026 05:13:16 +0000 (0:00:00.644) 0:00:14.946 ******** 2026-04-16 05:13:18.707745 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707754 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707763 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707772 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707781 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707788 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707795 | orchestrator | 2026-04-16 05:13:18.707803 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-04-16 05:13:18.707814 | orchestrator | Thursday 16 
April 2026 05:13:17 +0000 (0:00:00.923) 0:00:15.870 ******** 2026-04-16 05:13:18.707823 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707832 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707840 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707848 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.707856 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:13:18.707863 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:13:18.707871 | orchestrator | 2026-04-16 05:13:18.707878 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-04-16 05:13:18.707886 | orchestrator | Thursday 16 April 2026 05:13:18 +0000 (0:00:00.500) 0:00:16.370 ******** 2026-04-16 05:13:18.707894 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-04-16 05:13:18.707910 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-04-16 05:13:18.707919 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:13:18.707927 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-04-16 05:13:18.707945 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-04-16 05:13:18.707954 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:13:18.707962 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-04-16 05:13:18.707970 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-04-16 05:13:18.707979 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:13:18.707987 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-04-16 05:13:18.707996 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-04-16 05:13:18.708004 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:13:18.708012 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-04-16 05:13:18.708020 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-04-16 
05:13:18.708027 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:13:18.708035 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-16 05:13:18.708043 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-16 05:13:18.708051 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:13:18.708059 | orchestrator |
2026-04-16 05:13:18.708067 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-16 05:13:18.708088 | orchestrator | Thursday 16 April 2026 05:13:18 +0000 (0:00:00.639) 0:00:17.010 ********
2026-04-16 05:14:31.041055 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:14:31.041149 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:14:31.041160 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:14:31.041168 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.041176 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041183 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041191 | orchestrator |
2026-04-16 05:14:31.041199 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-16 05:14:31.041208 | orchestrator | Thursday 16 April 2026 05:13:19 +0000 (0:00:00.494) 0:00:17.504 ********
2026-04-16 05:14:31.041216 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:14:31.041223 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:14:31.041230 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:14:31.041237 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.041244 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041251 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041259 | orchestrator |
2026-04-16 05:14:31.041266 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-16 05:14:31.041273 | orchestrator |
2026-04-16 05:14:31.041281 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-16 05:14:31.041294 | orchestrator | Thursday 16 April 2026 05:13:20 +0000 (0:00:01.042) 0:00:18.546 ********
2026-04-16 05:14:31.041306 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.041318 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.041331 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.041345 | orchestrator |
2026-04-16 05:14:31.041358 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-16 05:14:31.041368 | orchestrator | Thursday 16 April 2026 05:13:21 +0000 (0:00:01.047) 0:00:19.594 ********
2026-04-16 05:14:31.041376 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.041383 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.041390 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.041397 | orchestrator |
2026-04-16 05:14:31.041404 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-16 05:14:31.041411 | orchestrator | Thursday 16 April 2026 05:13:22 +0000 (0:00:01.070) 0:00:20.664 ********
2026-04-16 05:14:31.041419 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.041426 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.041433 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.041441 | orchestrator |
2026-04-16 05:14:31.041448 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-16 05:14:31.041455 | orchestrator | Thursday 16 April 2026 05:13:23 +0000 (0:00:00.889) 0:00:21.553 ********
2026-04-16 05:14:31.041480 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.041488 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.041495 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.041537 | orchestrator |
2026-04-16 05:14:31.041545 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-16 05:14:31.041552 | orchestrator | Thursday 16 April 2026 05:13:23 +0000 (0:00:00.644) 0:00:22.198 ********
2026-04-16 05:14:31.041559 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.041567 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041574 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041581 | orchestrator |
2026-04-16 05:14:31.041588 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-16 05:14:31.041611 | orchestrator | Thursday 16 April 2026 05:13:24 +0000 (0:00:00.312) 0:00:22.510 ********
2026-04-16 05:14:31.041620 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.041628 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:14:31.041636 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:14:31.041644 | orchestrator |
2026-04-16 05:14:31.041653 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-16 05:14:31.041662 | orchestrator | Thursday 16 April 2026 05:13:25 +0000 (0:00:00.855) 0:00:23.366 ********
2026-04-16 05:14:31.041670 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.041678 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:14:31.041687 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:14:31.041695 | orchestrator |
2026-04-16 05:14:31.041703 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-16 05:14:31.041712 | orchestrator | Thursday 16 April 2026 05:13:26 +0000 (0:00:01.220) 0:00:24.586 ********
2026-04-16 05:14:31.041720 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:14:31.041729 | orchestrator |
2026-04-16 05:14:31.041738 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-16 05:14:31.041747 | orchestrator | Thursday 16 April 2026 05:13:26 +0000 (0:00:00.490) 0:00:25.077 ********
2026-04-16 05:14:31.041755 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.041763 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.041770 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.041777 | orchestrator |
2026-04-16 05:14:31.041784 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-16 05:14:31.041792 | orchestrator | Thursday 16 April 2026 05:13:28 +0000 (0:00:01.258) 0:00:26.336 ********
2026-04-16 05:14:31.041799 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041806 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041813 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.041820 | orchestrator |
2026-04-16 05:14:31.041828 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-16 05:14:31.041835 | orchestrator | Thursday 16 April 2026 05:13:28 +0000 (0:00:00.510) 0:00:26.846 ********
2026-04-16 05:14:31.041842 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041849 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041858 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.041866 | orchestrator |
2026-04-16 05:14:31.041875 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-16 05:14:31.041883 | orchestrator | Thursday 16 April 2026 05:13:29 +0000 (0:00:01.024) 0:00:27.871 ********
2026-04-16 05:14:31.041892 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041900 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041909 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.041917 | orchestrator |
2026-04-16 05:14:31.041926 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-16 05:14:31.041950 | orchestrator | Thursday 16 April 2026 05:13:30 +0000 (0:00:01.194) 0:00:29.065 ********
2026-04-16 05:14:31.041960 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.041976 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.041985 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.041994 | orchestrator |
2026-04-16 05:14:31.042002 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-04-16 05:14:31.042011 | orchestrator | Thursday 16 April 2026 05:13:31 +0000 (0:00:00.522) 0:00:29.588 ********
2026-04-16 05:14:31.042083 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.042093 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.042102 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.042110 | orchestrator |
2026-04-16 05:14:31.042119 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-04-16 05:14:31.042128 | orchestrator | Thursday 16 April 2026 05:13:31 +0000 (0:00:00.285) 0:00:29.874 ********
2026-04-16 05:14:31.042137 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:14:31.042145 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:14:31.042153 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:14:31.042162 | orchestrator |
2026-04-16 05:14:31.042177 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-04-16 05:14:31.042186 | orchestrator | Thursday 16 April 2026 05:13:32 +0000 (0:00:01.016) 0:00:30.890 ********
2026-04-16 05:14:31.042195 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.042204 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.042212 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.042221 | orchestrator |
2026-04-16 05:14:31.042229 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-04-16 05:14:31.042238 | orchestrator | Thursday 16 April 2026 05:13:35 +0000 (0:00:03.034) 0:00:33.925 ********
2026-04-16 05:14:31.042247 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.042255 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.042264 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.042276 | orchestrator |
2026-04-16 05:14:31.042285 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-04-16 05:14:31.042294 | orchestrator | Thursday 16 April 2026 05:13:35 +0000 (0:00:00.356) 0:00:34.281 ********
2026-04-16 05:14:31.042303 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-16 05:14:31.042314 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-16 05:14:31.042323 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-04-16 05:14:31.042331 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-16 05:14:31.042340 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-16 05:14:31.042349 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-04-16 05:14:31.042358 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-16 05:14:31.042366 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-16 05:14:31.042375 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-04-16 05:14:31.042383 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-16 05:14:31.042392 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-16 05:14:31.042407 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-04-16 05:14:31.042416 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-16 05:14:31.042424 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-16 05:14:31.042433 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-04-16 05:14:31.042442 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:14:31.042450 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:14:31.042459 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:14:31.042467 | orchestrator |
2026-04-16 05:14:31.042481 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-04-16 05:14:31.042490 | orchestrator | Thursday 16 April 2026 05:14:29 +0000 (0:00:53.818) 0:01:28.099 ********
2026-04-16 05:14:31.042516 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:14:31.042525 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:14:31.042534 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:14:31.042542 | orchestrator |
2026-04-16 05:14:31.042551 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-04-16 05:14:31.042560 | orchestrator | Thursday 16 April 2026 05:14:30 +0000 (0:00:00.289) 0:01:28.388 ********
2026-04-16 05:14:31.042575 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.182935 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183015 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183021 | orchestrator |
2026-04-16 05:15:10.183027 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-04-16 05:15:10.183033 | orchestrator | Thursday 16 April 2026 05:14:31 +0000 (0:00:00.964) 0:01:29.352 ********
2026-04-16 05:15:10.183037 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183041 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183045 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183049 | orchestrator |
2026-04-16 05:15:10.183053 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-04-16 05:15:10.183057 | orchestrator | Thursday 16 April 2026 05:14:32 +0000 (0:00:01.138) 0:01:30.491 ********
2026-04-16 05:15:10.183061 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183064 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183068 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183072 | orchestrator |
2026-04-16 05:15:10.183078 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-04-16 05:15:10.183084 | orchestrator | Thursday 16 April 2026 05:14:56 +0000 (0:00:24.226) 0:01:54.717 ********
2026-04-16 05:15:10.183090 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183096 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183102 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183109 | orchestrator |
2026-04-16 05:15:10.183115 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-04-16 05:15:10.183122 | orchestrator | Thursday 16 April 2026 05:14:57 +0000 (0:00:00.623) 0:01:55.341 ********
2026-04-16 05:15:10.183128 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183134 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183141 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183147 | orchestrator |
2026-04-16 05:15:10.183153 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-04-16 05:15:10.183160 | orchestrator | Thursday 16 April 2026 05:14:57 +0000 (0:00:00.614) 0:01:55.955 ********
2026-04-16 05:15:10.183167 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183173 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183179 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183186 | orchestrator |
2026-04-16 05:15:10.183192 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-04-16 05:15:10.183216 | orchestrator | Thursday 16 April 2026 05:14:58 +0000 (0:00:00.616) 0:01:56.572 ********
2026-04-16 05:15:10.183223 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183229 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183236 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183242 | orchestrator |
2026-04-16 05:15:10.183248 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-04-16 05:15:10.183255 | orchestrator | Thursday 16 April 2026 05:14:59 +0000 (0:00:00.763) 0:01:57.336 ********
2026-04-16 05:15:10.183261 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183268 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183272 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183276 | orchestrator |
2026-04-16 05:15:10.183279 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-04-16 05:15:10.183283 | orchestrator | Thursday 16 April 2026 05:14:59 +0000 (0:00:00.296) 0:01:57.632 ********
2026-04-16 05:15:10.183287 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183291 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183295 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183298 | orchestrator |
2026-04-16 05:15:10.183303 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-04-16 05:15:10.183308 | orchestrator | Thursday 16 April 2026 05:14:59 +0000 (0:00:00.605) 0:01:58.237 ********
2026-04-16 05:15:10.183314 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183320 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183326 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183332 | orchestrator |
2026-04-16 05:15:10.183338 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-04-16 05:15:10.183344 | orchestrator | Thursday 16 April 2026 05:15:00 +0000 (0:00:00.590) 0:01:58.827 ********
2026-04-16 05:15:10.183350 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183357 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183362 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183369 | orchestrator |
2026-04-16 05:15:10.183375 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-04-16 05:15:10.183379 | orchestrator | Thursday 16 April 2026 05:15:01 +0000 (0:00:01.052) 0:01:59.880 ********
2026-04-16 05:15:10.183385 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:15:10.183391 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:15:10.183398 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:15:10.183403 | orchestrator |
2026-04-16 05:15:10.183409 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-04-16 05:15:10.183415 | orchestrator | Thursday 16 April 2026 05:15:02 +0000 (0:00:00.804) 0:02:00.685 ********
2026-04-16 05:15:10.183421 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:15:10.183426 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:15:10.183432 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:15:10.183438 | orchestrator |
2026-04-16 05:15:10.183444 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-04-16 05:15:10.183450 | orchestrator | Thursday 16 April 2026 05:15:02 +0000 (0:00:00.286) 0:02:00.971 ********
2026-04-16 05:15:10.183456 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:15:10.183462 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:15:10.183467 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:15:10.183474 | orchestrator |
2026-04-16 05:15:10.183514 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-04-16 05:15:10.183522 | orchestrator | Thursday 16 April 2026 05:15:02 +0000 (0:00:00.268) 0:02:01.239 ********
2026-04-16 05:15:10.183528 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183535 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183541 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183547 | orchestrator |
2026-04-16 05:15:10.183553 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-04-16 05:15:10.183559 | orchestrator | Thursday 16 April 2026 05:15:03 +0000 (0:00:00.616) 0:02:01.856 ********
2026-04-16 05:15:10.183572 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:15:10.183578 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:15:10.183599 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:15:10.183607 | orchestrator |
2026-04-16 05:15:10.183614 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-04-16 05:15:10.183623 | orchestrator | Thursday 16 April 2026 05:15:04 +0000 (0:00:00.831) 0:02:02.687 ********
2026-04-16 05:15:10.183629 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-16 05:15:10.183636 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-16 05:15:10.183643 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-04-16 05:15:10.183649 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-16 05:15:10.183656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-16 05:15:10.183662 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-04-16 05:15:10.183668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-16 05:15:10.183675 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-16 05:15:10.183682 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-04-16 05:15:10.183688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-04-16 05:15:10.183695 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-16 05:15:10.183702 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-04-16 05:15:10.183708 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-16 05:15:10.183717 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-16 05:15:10.183727 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-04-16 05:15:10.183733 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-16 05:15:10.183739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-16 05:15:10.183745 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-04-16 05:15:10.183752 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-16 05:15:10.183758 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-04-16 05:15:10.183764 | orchestrator |
2026-04-16 05:15:10.183771 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-04-16 05:15:10.183777 | orchestrator |
2026-04-16 05:15:10.183783 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-04-16 05:15:10.183790 | orchestrator | Thursday 16 April 2026 05:15:07 +0000 (0:00:02.892) 0:02:05.580 ********
2026-04-16 05:15:10.183796 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:15:10.183802 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:15:10.183808 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:15:10.183814 | orchestrator |
2026-04-16 05:15:10.183834 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-04-16 05:15:10.183841 | orchestrator | Thursday 16 April 2026 05:15:07 +0000 (0:00:00.319) 0:02:05.899 ********
2026-04-16 05:15:10.183847 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:15:10.183853 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:15:10.183859 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:15:10.183870 | orchestrator |
2026-04-16 05:15:10.183877 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-04-16 05:15:10.183883 | orchestrator | Thursday 16 April 2026 05:15:08 +0000 (0:00:00.898) 0:02:06.797 ********
2026-04-16 05:15:10.183889 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:15:10.183896 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:15:10.183901 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:15:10.183907 | orchestrator |
2026-04-16 05:15:10.183913 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-04-16 05:15:10.183920 | orchestrator | Thursday 16 April 2026 05:15:08 +0000 (0:00:00.310) 0:02:07.107 ********
2026-04-16 05:15:10.183926 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:15:10.183933 | orchestrator |
2026-04-16 05:15:10.183939 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-04-16 05:15:10.183945 | orchestrator | Thursday 16 April 2026 05:15:09 +0000 (0:00:00.447) 0:02:07.555 ********
2026-04-16 05:15:10.183952 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:15:10.183958 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:15:10.183964 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:15:10.183970 | orchestrator |
2026-04-16 05:15:10.183976 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-04-16 05:15:10.183982 | orchestrator | Thursday 16 April 2026 05:15:09 +0000 (0:00:00.459) 0:02:08.014 ********
2026-04-16 05:15:10.183988 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:15:10.183994 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:15:10.184000 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:15:10.184005 | orchestrator |
2026-04-16 05:15:10.184012 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-04-16 05:15:10.184017 | orchestrator | Thursday 16 April 2026 05:15:10 +0000 (0:00:00.313) 0:02:08.327 ********
2026-04-16 05:15:10.184028 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:16:46.555429 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:16:46.555598 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:16:46.555628 | orchestrator |
2026-04-16 05:16:46.555647 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-04-16 05:16:46.555667 | orchestrator | Thursday 16 April 2026 05:15:10 +0000 (0:00:00.296) 0:02:08.624 ********
2026-04-16 05:16:46.555684 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:16:46.555703 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:16:46.555720 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:16:46.555739 | orchestrator |
2026-04-16 05:16:46.555757 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-04-16 05:16:46.555776 | orchestrator | Thursday 16 April 2026 05:15:10 +0000 (0:00:00.646) 0:02:09.271 ********
2026-04-16 05:16:46.555796 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:16:46.555815 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:16:46.555833 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:16:46.555850 | orchestrator |
2026-04-16 05:16:46.555868 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-04-16 05:16:46.555887 | orchestrator | Thursday 16 April 2026 05:15:12 +0000 (0:00:01.362) 0:02:10.634 ********
2026-04-16 05:16:46.555904 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:16:46.555923 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:16:46.555941 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:16:46.555959 | orchestrator |
2026-04-16 05:16:46.555973 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-04-16 05:16:46.555987 | orchestrator | Thursday 16 April 2026 05:15:13 +0000 (0:00:01.209) 0:02:11.843 ********
2026-04-16 05:16:46.556000 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:16:46.556019 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:16:46.556046 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:16:46.556066 | orchestrator |
2026-04-16 05:16:46.556085 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-04-16 05:16:46.556136 | orchestrator |
2026-04-16 05:16:46.556155 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-04-16 05:16:46.556205 | orchestrator | Thursday 16 April 2026 05:15:23 +0000 (0:00:10.170) 0:02:22.014 ********
2026-04-16 05:16:46.556240 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.556261 | orchestrator |
2026-04-16 05:16:46.556273 | orchestrator | TASK [Create .kube directory] **************************************************
2026-04-16 05:16:46.556284 | orchestrator | Thursday 16 April 2026 05:15:24 +0000 (0:00:00.995) 0:02:23.009 ********
2026-04-16 05:16:46.556294 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.556306 | orchestrator |
2026-04-16 05:16:46.556316 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-04-16 05:16:46.556327 | orchestrator | Thursday 16 April 2026 05:15:25 +0000 (0:00:00.495) 0:02:23.505 ********
2026-04-16 05:16:46.556338 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-16 05:16:46.556349 | orchestrator |
2026-04-16 05:16:46.556359 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-04-16 05:16:46.556370 | orchestrator | Thursday 16 April 2026 05:15:25 +0000 (0:00:00.529) 0:02:24.035 ********
2026-04-16 05:16:46.556388 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.556407 | orchestrator |
2026-04-16 05:16:46.556423 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-04-16 05:16:46.556439 | orchestrator | Thursday 16 April 2026 05:15:26 +0000 (0:00:00.845) 0:02:24.880 ********
2026-04-16 05:16:46.556523 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.556541 | orchestrator |
2026-04-16 05:16:46.556558 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-04-16 05:16:46.556576 | orchestrator | Thursday 16 April 2026 05:15:27 +0000 (0:00:00.581) 0:02:25.462 ********
2026-04-16 05:16:46.556596 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-16 05:16:46.556615 | orchestrator |
2026-04-16 05:16:46.556633 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-04-16 05:16:46.556650 | orchestrator | Thursday 16 April 2026 05:15:28 +0000 (0:00:01.509) 0:02:26.972 ********
2026-04-16 05:16:46.556661 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-16 05:16:46.556672 | orchestrator |
2026-04-16 05:16:46.556703 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-04-16 05:16:46.556720 | orchestrator | Thursday 16 April 2026 05:15:29 +0000 (0:00:00.820) 0:02:27.792 ********
2026-04-16 05:16:46.556731 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.556741 | orchestrator |
2026-04-16 05:16:46.556752 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-04-16 05:16:46.556763 | orchestrator | Thursday 16 April 2026 05:15:29 +0000 (0:00:00.436) 0:02:28.228 ********
2026-04-16 05:16:46.556774 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.556785 | orchestrator |
2026-04-16 05:16:46.556795 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-04-16 05:16:46.556806 | orchestrator |
2026-04-16 05:16:46.556825 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-04-16 05:16:46.556854 | orchestrator | Thursday 16 April 2026 05:15:30 +0000 (0:00:00.340) 0:02:28.667 ********
2026-04-16 05:16:46.556873 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.556890 | orchestrator |
2026-04-16 05:16:46.556908 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-04-16 05:16:46.556926 | orchestrator | Thursday 16 April 2026 05:15:30 +0000 (0:00:00.340) 0:02:29.008 ********
2026-04-16 05:16:46.556944 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-04-16 05:16:46.556962 | orchestrator |
2026-04-16 05:16:46.556979 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-04-16 05:16:46.556997 | orchestrator | Thursday 16 April 2026 05:15:30 +0000 (0:00:00.217) 0:02:29.226 ********
2026-04-16 05:16:46.557016 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.557035 | orchestrator |
2026-04-16 05:16:46.557070 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-04-16 05:16:46.557082 | orchestrator | Thursday 16 April 2026 05:15:31 +0000 (0:00:00.800) 0:02:30.026 ********
2026-04-16 05:16:46.557092 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.557103 | orchestrator |
2026-04-16 05:16:46.557137 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-04-16 05:16:46.557149 | orchestrator | Thursday 16 April 2026 05:15:33 +0000 (0:00:01.585) 0:02:31.612 ********
2026-04-16 05:16:46.557159 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.557170 | orchestrator |
2026-04-16 05:16:46.557181 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-04-16 05:16:46.557191 | orchestrator | Thursday 16 April 2026 05:15:34 +0000 (0:00:00.829) 0:02:32.441 ********
2026-04-16 05:16:46.557202 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.557212 | orchestrator |
2026-04-16 05:16:46.557223 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-04-16 05:16:46.557237 | orchestrator | Thursday 16 April 2026 05:15:34 +0000 (0:00:00.504) 0:02:32.945 ********
2026-04-16 05:16:46.557256 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.557275 | orchestrator |
2026-04-16 05:16:46.557292 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-04-16 05:16:46.557310 | orchestrator | Thursday 16 April 2026 05:15:41 +0000 (0:00:07.305) 0:02:40.251 ********
2026-04-16 05:16:46.557321 | orchestrator | changed: [testbed-manager]
2026-04-16 05:16:46.557338 | orchestrator |
2026-04-16 05:16:46.557365 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-04-16 05:16:46.557386 | orchestrator | Thursday 16 April 2026 05:15:53 +0000 (0:00:11.676) 0:02:51.928 ********
2026-04-16 05:16:46.557404 | orchestrator | ok: [testbed-manager]
2026-04-16 05:16:46.557421 | orchestrator |
2026-04-16 05:16:46.557439 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-04-16 05:16:46.557489 | orchestrator |
2026-04-16 05:16:46.557507 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-04-16 05:16:46.557527 | orchestrator | Thursday 16 April 2026 05:15:54 +0000 (0:00:00.714) 0:02:52.642 ********
2026-04-16 05:16:46.557545 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:16:46.557564 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:16:46.557574 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:16:46.557585 | orchestrator |
2026-04-16 05:16:46.557596 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-04-16 05:16:46.557607 | orchestrator | Thursday 16 April 2026 05:15:54 +0000 (0:00:00.301) 0:02:52.944 ********
2026-04-16 05:16:46.557617 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:16:46.557628 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:16:46.557639 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:16:46.557649 | orchestrator |
2026-04-16 05:16:46.557660 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-04-16 05:16:46.557671 | orchestrator | Thursday 16 April 2026 05:15:54 +0000 (0:00:00.306) 0:02:53.250 ********
2026-04-16 05:16:46.557681 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:16:46.557692 | orchestrator |
2026-04-16 05:16:46.557703 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-04-16 05:16:46.557713 | orchestrator | Thursday 16 April 2026 05:15:55 +0000 (0:00:00.675) 0:02:53.926 ********
2026-04-16 05:16:46.557724 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-16 05:16:46.557735 | orchestrator |
2026-04-16 05:16:46.557746 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-04-16 05:16:46.557756 | orchestrator | Thursday 16 April 2026 05:15:56 +0000 (0:00:00.785) 0:02:54.711 ********
2026-04-16 05:16:46.557767 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 05:16:46.557777 | orchestrator |
2026-04-16 05:16:46.557788 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-04-16 05:16:46.557799 | orchestrator | Thursday 16 April 2026 05:15:57 +0000 (0:00:00.799) 0:02:55.510 ********
2026-04-16 05:16:46.557821 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:16:46.557832 | orchestrator |
2026-04-16 05:16:46.557843 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-04-16 05:16:46.557853 | orchestrator | Thursday 16 April 2026 05:15:57 +0000 (0:00:00.117) 0:02:55.628 ********
2026-04-16 05:16:46.557872 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 05:16:46.557889 | orchestrator |
2026-04-16 05:16:46.557907 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-04-16 05:16:46.557925 | orchestrator | Thursday 16 April 2026 05:15:58 +0000 (0:00:01.001) 0:02:56.629 ********
2026-04-16 05:16:46.557941 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:16:46.557958 | orchestrator |
2026-04-16 05:16:46.557975 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-04-16 05:16:46.557992 | orchestrator | Thursday 16 April 2026 05:15:58 +0000 (0:00:00.118) 0:02:56.747 ********
2026-04-16 05:16:46.558010 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:16:46.558104 | orchestrator |
2026-04-16 05:16:46.558117 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-04-16 05:16:46.558127 | orchestrator | Thursday 16
April 2026 05:15:58 +0000 (0:00:00.116) 0:02:56.864 ******** 2026-04-16 05:16:46.558138 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:16:46.558148 | orchestrator | 2026-04-16 05:16:46.558159 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-16 05:16:46.558179 | orchestrator | Thursday 16 April 2026 05:15:58 +0000 (0:00:00.123) 0:02:56.988 ******** 2026-04-16 05:16:46.558190 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:16:46.558200 | orchestrator | 2026-04-16 05:16:46.558211 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-16 05:16:46.558222 | orchestrator | Thursday 16 April 2026 05:15:58 +0000 (0:00:00.113) 0:02:57.101 ******** 2026-04-16 05:16:46.558233 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 05:16:46.558243 | orchestrator | 2026-04-16 05:16:46.558254 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-16 05:16:46.558264 | orchestrator | Thursday 16 April 2026 05:16:04 +0000 (0:00:05.316) 0:03:02.417 ******** 2026-04-16 05:16:46.558275 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-16 05:16:46.558286 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-04-16 05:16:46.558315 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-16 05:17:07.612630 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-16 05:17:07.612751 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-16 05:17:07.612767 | orchestrator | 2026-04-16 05:17:07.612781 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-16 05:17:07.612793 | orchestrator | Thursday 16 April 2026 05:16:46 +0000 (0:00:42.446) 0:03:44.863 ******** 2026-04-16 05:17:07.612804 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 05:17:07.612815 | orchestrator | 2026-04-16 05:17:07.612826 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-16 05:17:07.612838 | orchestrator | Thursday 16 April 2026 05:16:47 +0000 (0:00:01.161) 0:03:46.025 ******** 2026-04-16 05:17:07.612849 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 05:17:07.612860 | orchestrator | 2026-04-16 05:17:07.612870 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-16 05:17:07.612881 | orchestrator | Thursday 16 April 2026 05:16:49 +0000 (0:00:01.650) 0:03:47.675 ******** 2026-04-16 05:17:07.612892 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 05:17:07.612902 | orchestrator | 2026-04-16 05:17:07.612913 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-16 05:17:07.612924 | orchestrator | Thursday 16 April 2026 05:16:50 +0000 (0:00:01.074) 0:03:48.749 ******** 2026-04-16 05:17:07.612958 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:17:07.612970 | orchestrator | 2026-04-16 05:17:07.612980 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-16 05:17:07.612991 | orchestrator 
| Thursday 16 April 2026 05:16:50 +0000 (0:00:00.104) 0:03:48.854 ******** 2026-04-16 05:17:07.613002 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-16 05:17:07.613013 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-16 05:17:07.613024 | orchestrator | 2026-04-16 05:17:07.613035 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-16 05:17:07.613046 | orchestrator | Thursday 16 April 2026 05:16:52 +0000 (0:00:01.776) 0:03:50.630 ******** 2026-04-16 05:17:07.613056 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:17:07.613067 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:17:07.613077 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:17:07.613088 | orchestrator | 2026-04-16 05:17:07.613101 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-16 05:17:07.613114 | orchestrator | Thursday 16 April 2026 05:16:52 +0000 (0:00:00.302) 0:03:50.932 ******** 2026-04-16 05:17:07.613127 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:17:07.613139 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:17:07.613152 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:17:07.613164 | orchestrator | 2026-04-16 05:17:07.613177 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-16 05:17:07.613190 | orchestrator | 2026-04-16 05:17:07.613202 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-16 05:17:07.613215 | orchestrator | Thursday 16 April 2026 05:16:53 +0000 (0:00:00.804) 0:03:51.737 ******** 2026-04-16 05:17:07.613228 | orchestrator | ok: [testbed-manager] 2026-04-16 05:17:07.613240 | orchestrator | 2026-04-16 05:17:07.613254 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-04-16 05:17:07.613266 | orchestrator | Thursday 16 April 2026 05:16:53 +0000 (0:00:00.332) 0:03:52.069 ******** 2026-04-16 05:17:07.613279 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 05:17:07.613292 | orchestrator | 2026-04-16 05:17:07.613305 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-16 05:17:07.613317 | orchestrator | Thursday 16 April 2026 05:16:53 +0000 (0:00:00.223) 0:03:52.292 ******** 2026-04-16 05:17:07.613330 | orchestrator | changed: [testbed-manager] 2026-04-16 05:17:07.613342 | orchestrator | 2026-04-16 05:17:07.613355 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-16 05:17:07.613369 | orchestrator | 2026-04-16 05:17:07.613381 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-16 05:17:07.613394 | orchestrator | Thursday 16 April 2026 05:16:58 +0000 (0:00:04.975) 0:03:57.268 ******** 2026-04-16 05:17:07.613407 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:17:07.613419 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:17:07.613432 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:17:07.613464 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:17:07.613475 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:17:07.613486 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:17:07.613497 | orchestrator | 2026-04-16 05:17:07.613508 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-16 05:17:07.613518 | orchestrator | Thursday 16 April 2026 05:16:59 +0000 (0:00:00.733) 0:03:58.001 ******** 2026-04-16 05:17:07.613529 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-16 05:17:07.613540 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-04-16 05:17:07.613551 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-16 05:17:07.613562 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-16 05:17:07.613581 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-16 05:17:07.613592 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-16 05:17:07.613603 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 05:17:07.613613 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 05:17:07.613624 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 05:17:07.613654 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 05:17:07.613666 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 05:17:07.613677 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 05:17:07.613688 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-16 05:17:07.613698 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 05:17:07.613709 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 05:17:07.613738 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-16 05:17:07.613749 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 05:17:07.613760 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-04-16 05:17:07.613770 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 05:17:07.613781 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 05:17:07.613792 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 05:17:07.613803 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 05:17:07.613813 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 05:17:07.613824 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 05:17:07.613834 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 05:17:07.613845 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 05:17:07.613856 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 05:17:07.613866 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 05:17:07.613877 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 05:17:07.613887 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 05:17:07.613898 | orchestrator | 2026-04-16 05:17:07.613909 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-16 05:17:07.613920 | orchestrator | Thursday 16 April 2026 05:17:06 +0000 (0:00:06.968) 0:04:04.969 ******** 2026-04-16 05:17:07.613931 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:17:07.613941 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:17:07.613952 | orchestrator | 
skipping: [testbed-node-5] 2026-04-16 05:17:07.613963 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:17:07.613973 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:17:07.613984 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:17:07.613995 | orchestrator | 2026-04-16 05:17:07.614005 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-16 05:17:07.614077 | orchestrator | Thursday 16 April 2026 05:17:07 +0000 (0:00:00.453) 0:04:05.423 ******** 2026-04-16 05:17:07.614092 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:17:07.614111 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:17:07.614122 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:17:07.614168 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:17:07.614180 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:17:07.614191 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:17:07.614202 | orchestrator | 2026-04-16 05:17:07.614212 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:17:07.614223 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:17:07.614237 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-04-16 05:17:07.614249 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-16 05:17:07.614260 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-16 05:17:07.614271 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 05:17:07.614281 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 05:17:07.614292 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 05:17:07.614303 | orchestrator | 2026-04-16 05:17:07.614313 | orchestrator | 2026-04-16 05:17:07.614324 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:17:07.614335 | orchestrator | Thursday 16 April 2026 05:17:07 +0000 (0:00:00.497) 0:04:05.921 ******** 2026-04-16 05:17:07.614355 | orchestrator | =============================================================================== 2026-04-16 05:17:07.817725 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.82s 2026-04-16 05:17:07.817825 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.45s 2026-04-16 05:17:07.817840 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.23s 2026-04-16 05:17:07.817852 | orchestrator | kubectl : Install required packages ------------------------------------ 11.68s 2026-04-16 05:17:07.817862 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.17s 2026-04-16 05:17:07.817873 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.31s 2026-04-16 05:17:07.817884 | orchestrator | Manage labels ----------------------------------------------------------- 6.97s 2026-04-16 05:17:07.817894 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.52s 2026-04-16 05:17:07.817905 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.32s 2026-04-16 05:17:07.817915 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 4.98s 2026-04-16 05:17:07.817926 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 3.03s 2026-04-16 05:17:07.817937 | orchestrator | k3s_server : Remove manifests and folders that are 
only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.89s 2026-04-16 05:17:07.817949 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.78s 2026-04-16 05:17:07.817960 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.65s 2026-04-16 05:17:07.817971 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.59s 2026-04-16 05:17:07.817981 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.51s 2026-04-16 05:17:07.817992 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.44s 2026-04-16 05:17:07.818085 | orchestrator | k3s_agent : Create custom resolv.conf for k3s --------------------------- 1.36s 2026-04-16 05:17:07.818099 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.28s 2026-04-16 05:17:07.818110 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.26s 2026-04-16 05:17:07.984609 | orchestrator | + osism apply copy-kubeconfig 2026-04-16 05:17:19.506587 | orchestrator | 2026-04-16 05:17:19 | INFO  | Task 038ddce3-6463-48e9-bd84-472d9fd0f1ba (copy-kubeconfig) was prepared for execution. 2026-04-16 05:17:19.506729 | orchestrator | 2026-04-16 05:17:19 | INFO  | It takes a moment until task 038ddce3-6463-48e9-bd84-472d9fd0f1ba (copy-kubeconfig) has been started and output is visible here. 
2026-04-16 05:17:25.599806 | orchestrator | 2026-04-16 05:17:25.599915 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-04-16 05:17:25.599931 | orchestrator | 2026-04-16 05:17:25.599943 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-16 05:17:25.599954 | orchestrator | Thursday 16 April 2026 05:17:23 +0000 (0:00:00.113) 0:00:00.113 ******** 2026-04-16 05:17:25.599965 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-16 05:17:25.599975 | orchestrator | 2026-04-16 05:17:25.599986 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-16 05:17:25.599996 | orchestrator | Thursday 16 April 2026 05:17:24 +0000 (0:00:00.659) 0:00:00.773 ******** 2026-04-16 05:17:25.600027 | orchestrator | changed: [testbed-manager] 2026-04-16 05:17:25.600039 | orchestrator | 2026-04-16 05:17:25.600050 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-04-16 05:17:25.600061 | orchestrator | Thursday 16 April 2026 05:17:25 +0000 (0:00:00.992) 0:00:01.765 ******** 2026-04-16 05:17:25.600071 | orchestrator | changed: [testbed-manager] 2026-04-16 05:17:25.600081 | orchestrator | 2026-04-16 05:17:25.600099 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:17:25.600129 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:17:25.600141 | orchestrator | 2026-04-16 05:17:25.600161 | orchestrator | 2026-04-16 05:17:25.600171 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:17:25.600181 | orchestrator | Thursday 16 April 2026 05:17:25 +0000 (0:00:00.376) 0:00:02.141 ******** 2026-04-16 05:17:25.600191 | orchestrator | 
=============================================================================== 2026-04-16 05:17:25.600200 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.99s 2026-04-16 05:17:25.600210 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s 2026-04-16 05:17:25.600220 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2026-04-16 05:17:25.782522 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-04-16 05:17:37.575806 | orchestrator | 2026-04-16 05:17:37 | INFO  | Task d3c754a0-fd7e-4098-b458-e5c652aa3bb8 (openstackclient) was prepared for execution. 2026-04-16 05:17:37.575919 | orchestrator | 2026-04-16 05:17:37 | INFO  | It takes a moment until task d3c754a0-fd7e-4098-b458-e5c652aa3bb8 (openstackclient) has been started and output is visible here. 2026-04-16 05:18:21.338012 | orchestrator | 2026-04-16 05:18:21.338199 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-16 05:18:21.338216 | orchestrator | 2026-04-16 05:18:21.338227 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-16 05:18:21.338237 | orchestrator | Thursday 16 April 2026 05:17:41 +0000 (0:00:00.168) 0:00:00.168 ******** 2026-04-16 05:18:21.338249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-16 05:18:21.338260 | orchestrator | 2026-04-16 05:18:21.338270 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-16 05:18:21.338305 | orchestrator | Thursday 16 April 2026 05:17:41 +0000 (0:00:00.166) 0:00:00.334 ******** 2026-04-16 05:18:21.338315 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-16 
05:18:21.338326 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-16 05:18:21.338336 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-16 05:18:21.338346 | orchestrator | 2026-04-16 05:18:21.338356 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-16 05:18:21.338366 | orchestrator | Thursday 16 April 2026 05:17:42 +0000 (0:00:01.085) 0:00:01.419 ******** 2026-04-16 05:18:21.338376 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:21.338386 | orchestrator | 2026-04-16 05:18:21.338395 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-16 05:18:21.338405 | orchestrator | Thursday 16 April 2026 05:17:44 +0000 (0:00:01.176) 0:00:02.596 ******** 2026-04-16 05:18:21.338415 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-04-16 05:18:21.338470 | orchestrator | ok: [testbed-manager] 2026-04-16 05:18:21.338481 | orchestrator | 2026-04-16 05:18:21.338491 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-16 05:18:21.338501 | orchestrator | Thursday 16 April 2026 05:18:16 +0000 (0:00:32.465) 0:00:35.062 ******** 2026-04-16 05:18:21.338510 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:21.338520 | orchestrator | 2026-04-16 05:18:21.338529 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-16 05:18:21.338539 | orchestrator | Thursday 16 April 2026 05:18:17 +0000 (0:00:00.858) 0:00:35.921 ******** 2026-04-16 05:18:21.338549 | orchestrator | ok: [testbed-manager] 2026-04-16 05:18:21.338560 | orchestrator | 2026-04-16 05:18:21.338571 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-16 05:18:21.338583 | orchestrator | Thursday 16 April 2026 05:18:18 +0000 
(0:00:00.633) 0:00:36.554 ******** 2026-04-16 05:18:21.338594 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:21.338604 | orchestrator | 2026-04-16 05:18:21.338616 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-16 05:18:21.338627 | orchestrator | Thursday 16 April 2026 05:18:19 +0000 (0:00:01.355) 0:00:37.910 ******** 2026-04-16 05:18:21.338638 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:21.338649 | orchestrator | 2026-04-16 05:18:21.338660 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-16 05:18:21.338671 | orchestrator | Thursday 16 April 2026 05:18:20 +0000 (0:00:00.654) 0:00:38.564 ******** 2026-04-16 05:18:21.338682 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:21.338693 | orchestrator | 2026-04-16 05:18:21.338704 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-16 05:18:21.338715 | orchestrator | Thursday 16 April 2026 05:18:20 +0000 (0:00:00.550) 0:00:39.115 ******** 2026-04-16 05:18:21.338725 | orchestrator | ok: [testbed-manager] 2026-04-16 05:18:21.338734 | orchestrator | 2026-04-16 05:18:21.338744 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:18:21.338754 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:18:21.338765 | orchestrator | 2026-04-16 05:18:21.338774 | orchestrator | 2026-04-16 05:18:21.338784 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:18:21.338793 | orchestrator | Thursday 16 April 2026 05:18:20 +0000 (0:00:00.381) 0:00:39.497 ******** 2026-04-16 05:18:21.338803 | orchestrator | =============================================================================== 2026-04-16 05:18:21.338813 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 32.47s 2026-04-16 05:18:21.338822 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.36s 2026-04-16 05:18:21.338839 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.18s 2026-04-16 05:18:21.338849 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.09s 2026-04-16 05:18:21.338859 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.86s 2026-04-16 05:18:21.338868 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.65s 2026-04-16 05:18:21.338878 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.63s 2026-04-16 05:18:21.338887 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.55s 2026-04-16 05:18:21.338897 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.38s 2026-04-16 05:18:21.338906 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.17s 2026-04-16 05:18:23.538776 | orchestrator | 2026-04-16 05:18:23 | INFO  | Task e7bc4df9-c6b7-413a-a70c-49169974371e (common) was prepared for execution. 2026-04-16 05:18:23.538878 | orchestrator | 2026-04-16 05:18:23 | INFO  | It takes a moment until task e7bc4df9-c6b7-413a-a70c-49169974371e (common) has been started and output is visible here. 
2026-04-16 05:18:34.395062 | orchestrator | 2026-04-16 05:18:34.395183 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-16 05:18:34.395201 | orchestrator | 2026-04-16 05:18:34.395213 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-16 05:18:34.395225 | orchestrator | Thursday 16 April 2026 05:18:27 +0000 (0:00:00.202) 0:00:00.202 ******** 2026-04-16 05:18:34.395236 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:18:34.395251 | orchestrator | 2026-04-16 05:18:34.395270 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-16 05:18:34.395287 | orchestrator | Thursday 16 April 2026 05:18:28 +0000 (0:00:01.127) 0:00:01.329 ******** 2026-04-16 05:18:34.395305 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395323 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395342 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395360 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395378 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395395 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395443 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395463 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 05:18:34.395480 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-04-16 05:18:34.395519 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395540 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395559 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395580 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395601 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395623 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395645 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 05:18:34.395665 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395716 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395738 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395759 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395779 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 05:18:34.395800 | orchestrator | 2026-04-16 05:18:34.395820 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-16 05:18:34.395840 | orchestrator | Thursday 16 April 2026 05:18:30 +0000 (0:00:02.442) 0:00:03.772 ******** 2026-04-16 05:18:34.395860 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:18:34.395883 | orchestrator | 2026-04-16 05:18:34.395903 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-16 05:18:34.395929 | orchestrator | Thursday 16 April 2026 05:18:32 +0000 (0:00:01.131) 0:00:04.904 ******** 2026-04-16 05:18:34.395955 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.395980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396138 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:34.396158 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:34.396178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:34.396225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472663 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472728 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472776 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 
05:18:35.472844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.472989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:35.473011 | orchestrator | 2026-04-16 05:18:35.473033 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-16 05:18:35.473052 | orchestrator | Thursday 16 April 2026 05:18:35 +0000 (0:00:03.199) 0:00:08.103 ******** 2026-04-16 05:18:35.473076 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:35.473097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:35.473115 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:35.473133 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:18:35.473152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:35.473197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.004846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.004976 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:18:36.005071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.005091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005114 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:18:36.005126 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.005142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005165 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:18:36.005196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.005216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005239 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:18:36.005250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.005261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.005284 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:18:36.005295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.005314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694754 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:18:36.694771 | orchestrator | 2026-04-16 05:18:36.694784 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-16 05:18:36.694796 | orchestrator | Thursday 16 April 2026 05:18:35 +0000 (0:00:00.790) 0:00:08.894 ******** 2026-04-16 05:18:36.694809 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.694823 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694835 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694846 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:18:36.694877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.694894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.694948 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:18:36.694988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.695014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.695026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.695037 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:18:36.695048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.695060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-04-16 05:18:36.695076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:36.695095 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:18:36.695107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:36.695137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109109 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:18:41.109129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:41.109145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109169 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:18:41.109181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 05:18:41.109215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:41.109238 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:18:41.109249 | orchestrator | 2026-04-16 
05:18:41.109261 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-16 05:18:41.109273 | orchestrator | Thursday 16 April 2026 05:18:37 +0000 (0:00:01.452) 0:00:10.346 ******** 2026-04-16 05:18:41.109284 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:18:41.109295 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:18:41.109305 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:18:41.109316 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:18:41.109343 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:18:41.109355 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:18:41.109366 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:18:41.109376 | orchestrator | 2026-04-16 05:18:41.109388 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-16 05:18:41.109399 | orchestrator | Thursday 16 April 2026 05:18:38 +0000 (0:00:00.580) 0:00:10.926 ******** 2026-04-16 05:18:41.109410 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:18:41.109469 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:18:41.109480 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:18:41.109491 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:18:41.109503 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:18:41.109516 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:18:41.109529 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:18:41.109542 | orchestrator | 2026-04-16 05:18:41.109554 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-16 05:18:41.109566 | orchestrator | Thursday 16 April 2026 05:18:38 +0000 (0:00:00.759) 0:00:11.685 ******** 2026-04-16 05:18:41.109579 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109674 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:41.109701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:43.835562 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835764 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:43.835913 | orchestrator | 2026-04-16 05:18:43.835925 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-16 05:18:43.835938 | orchestrator | Thursday 16 April 2026 05:18:42 +0000 (0:00:03.234) 
0:00:14.919 ******** 2026-04-16 05:18:43.835957 | orchestrator | [WARNING]: Skipped 2026-04-16 05:18:43.835978 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-16 05:18:43.835997 | orchestrator | to this access issue: 2026-04-16 05:18:43.836015 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-16 05:18:43.836032 | orchestrator | directory 2026-04-16 05:18:43.836050 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:18:43.836068 | orchestrator | 2026-04-16 05:18:43.836088 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-16 05:18:43.836107 | orchestrator | Thursday 16 April 2026 05:18:42 +0000 (0:00:00.910) 0:00:15.830 ******** 2026-04-16 05:18:43.836125 | orchestrator | [WARNING]: Skipped 2026-04-16 05:18:43.836158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-16 05:18:53.082957 | orchestrator | to this access issue: 2026-04-16 05:18:53.083053 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-16 05:18:53.083065 | orchestrator | directory 2026-04-16 05:18:53.083074 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:18:53.083083 | orchestrator | 2026-04-16 05:18:53.083093 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-16 05:18:53.083103 | orchestrator | Thursday 16 April 2026 05:18:44 +0000 (0:00:01.183) 0:00:17.014 ******** 2026-04-16 05:18:53.083130 | orchestrator | [WARNING]: Skipped 2026-04-16 05:18:53.083139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-16 05:18:53.083148 | orchestrator | to this access issue: 2026-04-16 05:18:53.083157 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-16 
05:18:53.083165 | orchestrator | directory 2026-04-16 05:18:53.083174 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:18:53.083182 | orchestrator | 2026-04-16 05:18:53.083191 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-16 05:18:53.083200 | orchestrator | Thursday 16 April 2026 05:18:44 +0000 (0:00:00.774) 0:00:17.788 ******** 2026-04-16 05:18:53.083208 | orchestrator | [WARNING]: Skipped 2026-04-16 05:18:53.083217 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-16 05:18:53.083225 | orchestrator | to this access issue: 2026-04-16 05:18:53.083234 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-16 05:18:53.083243 | orchestrator | directory 2026-04-16 05:18:53.083251 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 05:18:53.083260 | orchestrator | 2026-04-16 05:18:53.083268 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-16 05:18:53.083277 | orchestrator | Thursday 16 April 2026 05:18:45 +0000 (0:00:00.797) 0:00:18.585 ******** 2026-04-16 05:18:53.083285 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:53.083294 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:18:53.083302 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:18:53.083311 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:18:53.083319 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:18:53.083328 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:18:53.083351 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:18:53.083361 | orchestrator | 2026-04-16 05:18:53.083369 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-16 05:18:53.083378 | orchestrator | Thursday 16 April 2026 05:18:48 +0000 (0:00:02.444) 0:00:21.030 ******** 2026-04-16 
05:18:53.083386 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083396 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083405 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083456 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083465 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083474 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083487 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 05:18:53.083496 | orchestrator | 2026-04-16 05:18:53.083505 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-16 05:18:53.083515 | orchestrator | Thursday 16 April 2026 05:18:50 +0000 (0:00:02.031) 0:00:23.062 ******** 2026-04-16 05:18:53.083525 | orchestrator | changed: [testbed-manager] 2026-04-16 05:18:53.083535 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:18:53.083546 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:18:53.083556 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:18:53.083566 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:18:53.083576 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:18:53.083586 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:18:53.083596 | orchestrator | 2026-04-16 05:18:53.083605 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-16 05:18:53.083615 | orchestrator | Thursday 16 April 2026 
05:18:52 +0000 (0:00:01.859) 0:00:24.921 ******** 2026-04-16 05:18:53.083637 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:53.083666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:53.083676 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:53.083685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:53.083695 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:53.083708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:53.083718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 05:18:53.083733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:18:53.083750 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:18:53.083767 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-04-16 05:18:58.956848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.956968 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.956997 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:58.957038 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957113 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:58.957134 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957217 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957236 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957257 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:58.957277 | orchestrator |
2026-04-16 05:18:58.957299 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-16 05:18:58.957320 | orchestrator | Thursday 16 April 2026 05:18:53 +0000 (0:00:01.493) 0:00:26.415 ********
2026-04-16 05:18:58.957338 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957358 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957390 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957441 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957463 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957482 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957501 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 05:18:58.957521 | orchestrator |
2026-04-16 05:18:58.957542 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-16 05:18:58.957562 | orchestrator | Thursday 16 April 2026 05:18:55 +0000 (0:00:01.831) 0:00:28.247 ********
2026-04-16 05:18:58.957582 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957595 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957607 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957644 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957656 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957669 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 05:18:58.957681 | orchestrator |
2026-04-16 05:18:58.957694 | orchestrator | TASK [common : Check common containers] ****************************************
2026-04-16 05:18:58.957707 | orchestrator | Thursday 16 April 2026 05:18:56 +0000 (0:00:01.617) 0:00:29.864 ********
2026-04-16 05:18:58.957721 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:58.957747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.586990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.587101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.587138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.587163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.587175 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587195 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 05:18:59.587222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587256 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587268 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:18:59.587318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:20:24.271068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:20:24.271216 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:20:24.271241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:20:24.271279 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 05:20:24.271300 | orchestrator |
2026-04-16 05:20:24.271321 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-04-16 05:20:24.271341 | orchestrator | Thursday 16 April 2026 05:18:59 +0000 (0:00:02.613) 0:00:32.477 ********
2026-04-16 05:20:24.271359 | orchestrator | changed: [testbed-manager]
2026-04-16 05:20:24.271378 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:24.271395 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:24.271504 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:24.271523 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:20:24.271541 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:20:24.271558 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:20:24.271576 | orchestrator |
2026-04-16 05:20:24.271597 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-04-16 05:20:24.271618 | orchestrator | Thursday 16 April 2026 05:19:00 +0000 (0:00:01.409) 0:00:33.886 ********
2026-04-16 05:20:24.271639 | orchestrator | changed: [testbed-manager]
2026-04-16 05:20:24.271660 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:24.271681 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:24.271702 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:24.271721 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:20:24.271742 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:20:24.271762 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:20:24.271781 | orchestrator |
2026-04-16 05:20:24.271802 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.271821 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:01.115) 0:00:35.002 ********
2026-04-16 05:20:24.271840 | orchestrator |
2026-04-16 05:20:24.271859 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.271879 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.062) 0:00:35.065 ********
2026-04-16 05:20:24.271899 | orchestrator |
2026-04-16 05:20:24.271919 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.271940 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.063) 0:00:35.128 ********
2026-04-16 05:20:24.271959 | orchestrator |
2026-04-16 05:20:24.271979 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.271999 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.065) 0:00:35.193 ********
2026-04-16 05:20:24.272017 | orchestrator |
2026-04-16 05:20:24.272037 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.272075 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.231) 0:00:35.425 ********
2026-04-16 05:20:24.272096 | orchestrator |
2026-04-16 05:20:24.272115 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.272134 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.058) 0:00:35.484 ********
2026-04-16 05:20:24.272153 | orchestrator |
2026-04-16 05:20:24.272174 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 05:20:24.272194 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.062) 0:00:35.546 ********
2026-04-16 05:20:24.272214 | orchestrator |
2026-04-16 05:20:24.272234 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-16 05:20:24.272254 | orchestrator | Thursday 16 April 2026 05:19:02 +0000 (0:00:00.085) 0:00:35.632 ********
2026-04-16 05:20:24.272274 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:24.272293 | orchestrator | changed: [testbed-manager]
2026-04-16 05:20:24.272314 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:24.272334 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:24.272353 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:20:24.272423 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:20:24.272448 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:20:24.272466 | orchestrator |
2026-04-16 05:20:24.272484 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-16 05:20:24.272501 | orchestrator | Thursday 16 April 2026 05:19:38 +0000 (0:00:35.940) 0:01:11.573 ********
2026-04-16 05:20:24.272519 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:24.272537 | orchestrator | changed: [testbed-manager]
2026-04-16 05:20:24.272554 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:24.272571 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:24.272588 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:20:24.272603 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:20:24.272620 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:20:24.272637 | orchestrator |
2026-04-16 05:20:24.272654 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-16 05:20:24.272670 | orchestrator | Thursday 16 April 2026 05:20:13 +0000 (0:00:35.137) 0:01:46.710 ********
2026-04-16 05:20:24.272686 | orchestrator | ok: [testbed-manager]
2026-04-16 05:20:24.272704 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:20:24.272721 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:20:24.272738 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:20:24.272754 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:20:24.272770 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:20:24.272786 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:20:24.272803 | orchestrator |
2026-04-16 05:20:24.272820 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-16 05:20:24.272836 | orchestrator | Thursday 16 April 2026 05:20:15 +0000 (0:00:01.879) 0:01:48.590 ********
2026-04-16 05:20:24.272852 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:24.272944 | orchestrator | changed: [testbed-manager]
2026-04-16 05:20:24.272964 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:24.272981 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:24.272998 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:20:24.273016 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:20:24.273035 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:20:24.273053 | orchestrator |
2026-04-16 05:20:24.273071 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:20:24.273091 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273110 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273145 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273181 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273200 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273219 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273237 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 05:20:24.273253 | orchestrator |
2026-04-16 05:20:24.273270 | orchestrator |
2026-04-16 05:20:24.273289 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:20:24.273308 | orchestrator | Thursday 16 April 2026 05:20:24 +0000 (0:00:08.543) 0:01:57.134 ********
2026-04-16 05:20:24.273327 | orchestrator | ===============================================================================
2026-04-16 05:20:24.273344 | orchestrator | common : Restart fluentd container ------------------------------------- 35.94s
2026-04-16 05:20:24.273363 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.14s
2026-04-16 05:20:24.273379 | orchestrator | common : Restart cron container ----------------------------------------- 8.54s
2026-04-16 05:20:24.273425 | orchestrator | common : Copying over config.json files for services -------------------- 3.23s
2026-04-16 05:20:24.273447 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.20s
2026-04-16 05:20:24.273464 | orchestrator | common : Check common containers ---------------------------------------- 2.61s
2026-04-16 05:20:24.273481 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.44s
2026-04-16 05:20:24.273493 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.44s
2026-04-16 05:20:24.273504 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.03s
2026-04-16 05:20:24.273515 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s
2026-04-16 05:20:24.273526 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.86s
2026-04-16 05:20:24.273536 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.83s
2026-04-16 05:20:24.273547 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.62s
2026-04-16 05:20:24.273557 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.49s
2026-04-16 05:20:24.273568 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.45s
2026-04-16 05:20:24.273578 | orchestrator | common : Creating log volume -------------------------------------------- 1.41s
2026-04-16 05:20:24.273605 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.18s
2026-04-16 05:20:24.647833 | orchestrator | common : include_tasks -------------------------------------------------- 1.13s
2026-04-16 05:20:24.647937 | orchestrator | common : include_tasks -------------------------------------------------- 1.13s
2026-04-16 05:20:24.647952 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.12s
2026-04-16 05:20:26.905322 | orchestrator | 2026-04-16 05:20:26 | INFO  | Task 20fa10f5-a703-4b4b-a288-5e059468c92c (loadbalancer) was prepared for execution.
2026-04-16 05:20:26.905465 | orchestrator | 2026-04-16 05:20:26 | INFO  | It takes a moment until task 20fa10f5-a703-4b4b-a288-5e059468c92c (loadbalancer) has been started and output is visible here.
2026-04-16 05:20:39.246009 | orchestrator |
2026-04-16 05:20:39.246184 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 05:20:39.246201 | orchestrator |
2026-04-16 05:20:39.246213 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 05:20:39.246225 | orchestrator | Thursday 16 April 2026 05:20:30 +0000 (0:00:00.183) 0:00:00.183 ********
2026-04-16 05:20:39.246260 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:20:39.246274 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:20:39.246284 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:20:39.246295 | orchestrator |
2026-04-16 05:20:39.246306 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 05:20:39.246318 | orchestrator | Thursday 16 April 2026 05:20:30 +0000 (0:00:00.212) 0:00:00.396 ********
2026-04-16 05:20:39.246329 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-16 05:20:39.246340 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-16 05:20:39.246350 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-16 05:20:39.246361 | orchestrator |
2026-04-16 05:20:39.246372 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-16 05:20:39.246382 | orchestrator |
2026-04-16 05:20:39.246393 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-16 05:20:39.246451 | orchestrator | Thursday 16 April 2026 05:20:31 +0000 (0:00:00.297) 0:00:00.694 ********
2026-04-16 05:20:39.246463 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:20:39.246474 | orchestrator |
2026-04-16 05:20:39.246485 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-16 05:20:39.246496 | orchestrator | Thursday 16 April 2026 05:20:31 +0000 (0:00:00.456) 0:00:01.150 ********
2026-04-16 05:20:39.246507 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:20:39.246518 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:20:39.246529 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:20:39.246540 | orchestrator |
2026-04-16 05:20:39.246551 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-16 05:20:39.246562 | orchestrator | Thursday 16 April 2026 05:20:32 +0000 (0:00:00.560) 0:00:01.711 ********
2026-04-16 05:20:39.246573 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:20:39.246584 | orchestrator |
2026-04-16 05:20:39.246595 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-16 05:20:39.246605 | orchestrator | Thursday 16 April 2026 05:20:32 +0000 (0:00:00.563) 0:00:02.274 ********
2026-04-16 05:20:39.246616 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:20:39.246627 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:20:39.246638 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:20:39.246649 | orchestrator |
2026-04-16 05:20:39.246660 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-16 05:20:39.246671 | orchestrator | Thursday 16 April 2026 05:20:33 +0000 (0:00:00.568) 0:00:02.843 ********
2026-04-16 05:20:39.246681 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246692 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246703 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246725 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246736 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 05:20:39.246746 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-16 05:20:39.246758 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-16 05:20:39.246769 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-16 05:20:39.246780 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-16 05:20:39.246791 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-16 05:20:39.246809 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-04-16 05:20:39.246820 | orchestrator |
2026-04-16 05:20:39.246831 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-16 05:20:39.246842 | orchestrator | Thursday 16 April 2026 05:20:35 +0000 (0:00:01.971) 0:00:04.814 ********
2026-04-16 05:20:39.246853 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-16 05:20:39.246865 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-16 05:20:39.246876 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-16 05:20:39.246887 | orchestrator |
2026-04-16 05:20:39.246898 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-16 05:20:39.246909 | orchestrator | Thursday 16 April 2026 05:20:35 +0000 (0:00:00.670) 0:00:05.485 ********
2026-04-16 05:20:39.246920 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-04-16 05:20:39.246931 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-04-16 05:20:39.246942 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-04-16 05:20:39.246953 | orchestrator |
2026-04-16 05:20:39.246964 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-16 05:20:39.246975 | orchestrator | Thursday 16 April 2026 05:20:37 +0000 (0:00:01.204) 0:00:06.689 ********
2026-04-16 05:20:39.246986 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-04-16 05:20:39.246997 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:20:39.247026 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-04-16 05:20:39.247038 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:20:39.247049 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-04-16 05:20:39.247059 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:20:39.247070 | orchestrator |
2026-04-16 05:20:39.247081 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-04-16 05:20:39.247092 | orchestrator | Thursday 16 April 2026 05:20:37 +0000 (0:00:00.450) 0:00:07.140 ********
2026-04-16 05:20:39.247111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-16 05:20:39.247131 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:39.247143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:39.247161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 
05:20:39.247173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:39.247191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:44.304891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:20:44.305011 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 05:20:44.305030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 05:20:44.305043 | orchestrator |
2026-04-16 05:20:44.305056 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-04-16 05:20:44.305069 | orchestrator | Thursday 16 April 2026 05:20:39 +0000 (0:00:01.623) 0:00:08.763 ********
2026-04-16 05:20:44.305080 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:44.305114 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:44.305125 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:44.305137 | orchestrator |
2026-04-16 05:20:44.305148 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-04-16 05:20:44.305160 | orchestrator | Thursday 16 April 2026 05:20:40 +0000 (0:00:00.867) 0:00:09.630 ********
2026-04-16 05:20:44.305171 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-04-16 05:20:44.305182 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-04-16 05:20:44.305193 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-04-16 05:20:44.305204 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-04-16 05:20:44.305215 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-04-16 05:20:44.305225 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-04-16 05:20:44.305236 | orchestrator |
2026-04-16 05:20:44.305247 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-04-16 05:20:44.305258 | orchestrator | Thursday 16 April 2026 05:20:41 +0000 (0:00:01.425) 0:00:11.055 ********
2026-04-16 05:20:44.305269 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:20:44.305280 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:20:44.305290 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:20:44.305301 | orchestrator |
2026-04-16 05:20:44.305312 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-04-16 05:20:44.305323 | orchestrator | Thursday 16 April 2026 05:20:42 +0000 (0:00:00.902) 0:00:11.958 ********
2026-04-16 05:20:44.305334 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:20:44.305344 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:20:44.305355 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:20:44.305368 | orchestrator |
2026-04-16 05:20:44.305381 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-04-16 05:20:44.305394 | orchestrator | Thursday 16 April 2026 05:20:43 +0000 (0:00:01.281) 0:00:13.240 ********
2026-04-16 05:20:44.305441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:20:44.305476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:20:44.305491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:44.305507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:44.305529 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:20:44.305543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:20:44.305593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:20:44.305607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:44.305620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:44.305633 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:20:44.305653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:20:46.904464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:20:46.904597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:46.904615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:46.904628 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:20:46.904642 | orchestrator | 2026-04-16 05:20:46.904653 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-04-16 05:20:46.904666 | orchestrator | Thursday 16 April 2026 05:20:44 +0000 (0:00:00.587) 0:00:13.828 ******** 2026-04-16 05:20:46.904677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:46.904690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:46.904701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:46.904755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:46.904769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:46.904781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', 
'__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:46.904792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:46.904803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:46.904814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', 
'__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:46.904856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:20:54.861602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e', 
'__omit_place_holder__1700c3aae7e46fe44d6243874f1dafa7559f632e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 05:20:54.861628 | orchestrator | 2026-04-16 05:20:54.861650 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-04-16 05:20:54.861670 | orchestrator | Thursday 16 April 2026 05:20:46 +0000 (0:00:02.594) 0:00:16.422 ******** 2026-04-16 05:20:54.861690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861870 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:20:54.861882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:20:54.861894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:20:54.861906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:20:54.861916 | orchestrator | 2026-04-16 05:20:54.861928 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-16 05:20:54.861939 | orchestrator | Thursday 16 April 2026 05:20:49 +0000 (0:00:02.980) 0:00:19.403 ******** 2026-04-16 05:20:54.861960 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 05:20:54.861974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 05:20:54.861986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 05:20:54.861999 | orchestrator | 2026-04-16 05:20:54.862012 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-16 05:20:54.862087 | orchestrator | Thursday 16 April 2026 05:20:51 +0000 (0:00:01.732) 0:00:21.135 ******** 2026-04-16 05:20:54.862101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 05:20:54.862114 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 05:20:54.862126 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 05:20:54.862139 | orchestrator | 2026-04-16 05:20:54.862151 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-16 05:20:54.862164 | orchestrator | Thursday 16 April 2026 05:20:54 +0000 
(0:00:02.713) 0:00:23.849 ******** 2026-04-16 05:20:54.862177 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:20:54.862191 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:20:54.862204 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:20:54.862217 | orchestrator | 2026-04-16 05:20:54.862239 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-16 05:21:05.877254 | orchestrator | Thursday 16 April 2026 05:20:54 +0000 (0:00:00.539) 0:00:24.388 ******** 2026-04-16 05:21:05.877468 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 05:21:05.877504 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 05:21:05.877516 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 05:21:05.877527 | orchestrator | 2026-04-16 05:21:05.877539 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-16 05:21:05.877551 | orchestrator | Thursday 16 April 2026 05:20:56 +0000 (0:00:01.976) 0:00:26.364 ******** 2026-04-16 05:21:05.877563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 05:21:05.877575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 05:21:05.877585 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 05:21:05.877596 | orchestrator | 2026-04-16 05:21:05.877607 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-04-16 05:21:05.877618 | orchestrator | Thursday 16 April 2026 
05:20:58 +0000 (0:00:02.024) 0:00:28.389 ******** 2026-04-16 05:21:05.877630 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-04-16 05:21:05.877642 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-04-16 05:21:05.877653 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-04-16 05:21:05.877663 | orchestrator | 2026-04-16 05:21:05.877688 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-16 05:21:05.877699 | orchestrator | Thursday 16 April 2026 05:21:00 +0000 (0:00:01.339) 0:00:29.729 ******** 2026-04-16 05:21:05.877712 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-16 05:21:05.877723 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-16 05:21:05.877733 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-16 05:21:05.877744 | orchestrator | 2026-04-16 05:21:05.877755 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-16 05:21:05.877794 | orchestrator | Thursday 16 April 2026 05:21:01 +0000 (0:00:01.366) 0:00:31.095 ******** 2026-04-16 05:21:05.877807 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:05.877820 | orchestrator | 2026-04-16 05:21:05.877832 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-04-16 05:21:05.877845 | orchestrator | Thursday 16 April 2026 05:21:02 +0000 (0:00:00.523) 0:00:31.619 ******** 2026-04-16 05:21:05.877861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:05.877980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:05.877992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:05.878004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:05.878087 | orchestrator | 2026-04-16 05:21:05.878100 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-04-16 05:21:05.878112 | orchestrator | Thursday 16 April 2026 05:21:05 +0000 (0:00:03.117) 0:00:34.737 ******** 2026-04-16 05:21:05.878140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:06.634596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:06.634710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:06.634751 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:06.634766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:06.634779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:06.634790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:06.634801 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:06.634813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:06.634866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:06.634879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:06.634898 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:06.634910 | orchestrator | 2026-04-16 05:21:06.634922 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-04-16 
05:21:06.634935 | orchestrator | Thursday 16 April 2026 05:21:05 +0000 (0:00:00.666) 0:00:35.403 ******** 2026-04-16 05:21:06.634947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:06.634959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:06.634970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:06.634981 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:06.634993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:06.635016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:07.445988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:07.446136 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:07.446152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:07.446164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:07.446173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:07.446181 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:07.446189 | orchestrator | 2026-04-16 05:21:07.446198 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-16 05:21:07.446236 | orchestrator | Thursday 16 April 2026 05:21:06 +0000 (0:00:00.757) 0:00:36.160 ******** 2026-04-16 05:21:07.446245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:07.446254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:07.446278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:07.446293 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:07.446301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:07.446311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:07.446324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:07.446337 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:07.446351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:07.446382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:07.446452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:07.446489 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:08.731635 | orchestrator | 2026-04-16 05:21:08.731737 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-16 05:21:08.731753 | orchestrator | Thursday 16 April 2026 05:21:07 +0000 (0:00:00.798) 0:00:36.959 ******** 2026-04-16 05:21:08.731770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:08.731786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:08.731798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:08.731810 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:08.731823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:08.731835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:08.731871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:08.731902 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:08.731940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:08.731961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:08.731980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:08.731997 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:08.732015 | orchestrator | 2026-04-16 05:21:08.732034 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-16 05:21:08.732051 | orchestrator | Thursday 16 April 2026 05:21:07 +0000 (0:00:00.554) 0:00:37.514 ******** 2026-04-16 05:21:08.732071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:08.732090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:08.732139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:08.732161 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:08.732196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:09.730685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:09.730804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:09.730834 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:09.730855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:09.730874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:09.730893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:09.730941 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:09.730963 | orchestrator | 2026-04-16 05:21:09.730982 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-04-16 05:21:09.731004 | orchestrator | Thursday 16 April 2026 05:21:08 +0000 (0:00:00.738) 0:00:38.253 ******** 2026-04-16 05:21:09.731045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-04-16 05:21:09.731097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:09.731117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:09.731134 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:09.731153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-04-16 05:21:09.731172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:09.731191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:09.731226 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:09.731330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-04-16 05:21:09.731361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:11.063917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:11.064032 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:11.064051 | orchestrator | 2026-04-16 05:21:11.064064 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-04-16 05:21:11.064076 | orchestrator | Thursday 16 April 2026 05:21:09 +0000 (0:00:00.996) 0:00:39.249 ******** 2026-04-16 05:21:11.064090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:11.064105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:11.064153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:11.064176 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:11.064196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:11.064234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:11.064281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:11.064302 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:11.064321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:11.064341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:11.064361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:11.064389 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:11.064429 | orchestrator | 2026-04-16 05:21:11.064442 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-04-16 05:21:11.064456 | orchestrator | Thursday 16 April 2026 05:21:10 +0000 (0:00:00.585) 0:00:39.834 ******** 2026-04-16 05:21:11.064469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 05:21:11.064483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:11.064514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:17.216166 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:17.216278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 05:21:17.216300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:17.216335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:17.216348 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:17.216360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 05:21:17.216386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 05:21:17.216462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 05:21:17.216475 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:17.216487 | orchestrator | 2026-04-16 05:21:17.216499 | orchestrator | 
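The skipped loop items above each carry a Docker-style healthcheck block whose values (`interval`, `retries`, `start_period`, `timeout`) are strings, plus a `test` list in `CMD-SHELL` form. As a minimal sketch of how such a block maps onto `docker run` health flags (the helper `to_docker_flags` is hypothetical, not part of kolla-ansible):

```python
# Sketch: convert a kolla-style healthcheck dict (string-valued, as in the
# loop items logged above) into docker CLI health flags.
# "to_docker_flags" is a hypothetical illustrative helper.
def to_docker_flags(hc: dict) -> list[str]:
    flags = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    test = hc["test"]
    if test[0] == "CMD-SHELL":  # kolla's tests are CMD-SHELL + one shell string
        flags.append(f"--health-cmd={test[1]}")
    return flags

# Example taken verbatim from the proxysql items in this log:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
      "timeout": "30"}
```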
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-16 05:21:17.216511 | orchestrator | Thursday 16 April 2026 05:21:11 +0000 (0:00:00.755) 0:00:40.590 ******** 2026-04-16 05:21:17.216522 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 05:21:17.216552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 05:21:17.216564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 05:21:17.216575 | orchestrator | 2026-04-16 05:21:17.216585 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-16 05:21:17.216598 | orchestrator | Thursday 16 April 2026 05:21:12 +0000 (0:00:01.599) 0:00:42.190 ******** 2026-04-16 05:21:17.216609 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 05:21:17.216621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 05:21:17.216632 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 05:21:17.216642 | orchestrator | 2026-04-16 05:21:17.216653 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-16 05:21:17.216670 | orchestrator | Thursday 16 April 2026 05:21:14 +0000 (0:00:01.583) 0:00:43.774 ******** 2026-04-16 05:21:17.216681 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 05:21:17.216693 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 05:21:17.216706 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 05:21:17.216718 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 05:21:17.216731 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:17.216744 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 05:21:17.216757 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:17.216768 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 05:21:17.216781 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:17.216793 | orchestrator | 2026-04-16 05:21:17.216806 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-04-16 05:21:17.216818 | orchestrator | Thursday 16 April 2026 05:21:15 +0000 (0:00:00.770) 0:00:44.545 ******** 2026-04-16 05:21:17.216832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:17.216845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:17.216881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 05:21:17.216917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:21.020202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:21.020306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 05:21:21.020322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:21.020335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:21.020347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 05:21:21.020359 | orchestrator | 2026-04-16 05:21:21.020389 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-16 05:21:21.020439 | orchestrator | Thursday 16 April 2026 05:21:17 +0000 (0:00:02.195) 0:00:46.740 ******** 2026-04-16 05:21:21.020451 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:21.020462 | orchestrator | 2026-04-16 05:21:21.020473 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-16 05:21:21.020484 | orchestrator | Thursday 16 April 2026 05:21:17 +0000 (0:00:00.769) 0:00:47.510 ******** 2026-04-16 05:21:21.020516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 05:21:21.020550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:21.020563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.020575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.020587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 05:21:21.020619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:21.020644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.020674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.651992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 05:21:21.652104 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:21.652121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652163 | orchestrator | 2026-04-16 05:21:21.652177 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-04-16 05:21:21.652189 | orchestrator | Thursday 16 April 2026 05:21:21 +0000 (0:00:03.034) 0:00:50.545 ******** 2026-04-16 05:21:21.652202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 05:21:21.652254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:21.652268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652291 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:21.652303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 05:21:21.652321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:21.652344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:21.652367 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:21.652388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 05:21:29.592006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 05:21:29.592158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-04-16 05:21:29.592186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 05:21:29.592237 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:29.592259 | orchestrator | 2026-04-16 05:21:29.592279 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-16 05:21:29.592299 | orchestrator | Thursday 16 April 2026 05:21:21 +0000 (0:00:00.628) 0:00:51.174 ******** 2026-04-16 05:21:29.592320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592360 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:29.592388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592439 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:29.592450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-04-16 05:21:29.592472 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:29.592483 | orchestrator | 2026-04-16 05:21:29.592494 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-16 05:21:29.592508 | orchestrator | Thursday 16 April 2026 05:21:22 +0000 (0:00:01.025) 0:00:52.199 ******** 2026-04-16 05:21:29.592521 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:29.592533 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:29.592545 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:29.592557 | orchestrator | 2026-04-16 05:21:29.592571 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-16 05:21:29.592583 | orchestrator | Thursday 16 April 2026 05:21:23 +0000 (0:00:01.234) 0:00:53.433 ******** 2026-04-16 05:21:29.592596 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:29.592608 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:29.592620 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:29.592632 | orchestrator | 2026-04-16 05:21:29.592645 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-16 05:21:29.592658 | orchestrator | Thursday 16 April 2026 05:21:25 +0000 (0:00:01.930) 0:00:55.364 ******** 2026-04-16 05:21:29.592670 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:29.592683 | 
orchestrator | 2026-04-16 05:21:29.592716 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-16 05:21:29.592728 | orchestrator | Thursday 16 April 2026 05:21:26 +0000 (0:00:00.594) 0:00:55.959 ******** 2026-04-16 05:21:29.592743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:29.592772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:29.592786 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:29.592798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:29.592810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:29.592830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.160822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:30.160944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.160960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.160974 | orchestrator | 2026-04-16 05:21:30.160987 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-16 05:21:30.161000 | orchestrator | Thursday 16 April 2026 05:21:29 +0000 (0:00:03.158) 0:00:59.117 ******** 2026-04-16 05:21:30.161013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 05:21:30.161025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.161076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.161090 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:30.161109 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 05:21:30.161120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.161132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:30.161143 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:30.161154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 05:21:30.161181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 05:21:39.316238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:39.316355 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:39.316379 | orchestrator | 2026-04-16 05:21:39.316458 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-16 05:21:39.316482 | orchestrator | Thursday 16 April 2026 05:21:30 +0000 (0:00:00.571) 0:00:59.688 ******** 2026-04-16 05:21:39.316516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316551 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:39.316565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316595 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:39.316609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-04-16 05:21:39.316638 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:39.316652 | orchestrator | 2026-04-16 05:21:39.316667 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-04-16 05:21:39.316682 | orchestrator | Thursday 16 April 2026 05:21:30 +0000 (0:00:00.802) 0:01:00.490 ******** 2026-04-16 05:21:39.316696 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:39.316713 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:39.316728 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:39.316742 | orchestrator | 2026-04-16 05:21:39.316756 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-04-16 05:21:39.316765 | orchestrator | Thursday 16 April 2026 05:21:32 +0000 (0:00:01.485) 0:01:01.976 ******** 2026-04-16 05:21:39.316796 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:39.316807 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:39.316817 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:39.316826 | orchestrator | 2026-04-16 05:21:39.316837 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-04-16 05:21:39.316847 | orchestrator | 
Thursday 16 April 2026 05:21:34 +0000 (0:00:01.891) 0:01:03.868 ******** 2026-04-16 05:21:39.316858 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:39.316867 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:39.316877 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:39.316887 | orchestrator | 2026-04-16 05:21:39.316897 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-04-16 05:21:39.316907 | orchestrator | Thursday 16 April 2026 05:21:34 +0000 (0:00:00.282) 0:01:04.150 ******** 2026-04-16 05:21:39.316917 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:39.316927 | orchestrator | 2026-04-16 05:21:39.316937 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-04-16 05:21:39.316948 | orchestrator | Thursday 16 April 2026 05:21:35 +0000 (0:00:00.608) 0:01:04.759 ******** 2026-04-16 05:21:39.316980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 05:21:39.316999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 05:21:39.317010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 05:21:39.317020 | orchestrator | 2026-04-16 05:21:39.317029 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-16 05:21:39.317038 | orchestrator | Thursday 16 April 2026 05:21:38 +0000 (0:00:02.817) 0:01:07.577 ******** 2026-04-16 05:21:39.317054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 05:21:39.317063 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:39.317072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 05:21:39.317081 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:39.317097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 05:21:46.379760 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:46.379853 | orchestrator | 2026-04-16 05:21:46.379866 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-16 05:21:46.379876 | orchestrator | Thursday 16 April 2026 05:21:39 +0000 (0:00:01.267) 0:01:08.844 ******** 2026-04-16 05:21:46.379917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.379929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.379939 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:46.379947 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.379971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.379979 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:46.379987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.379995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-04-16 05:21:46.380003 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:46.380011 | orchestrator | 2026-04-16 05:21:46.380019 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-04-16 05:21:46.380027 | orchestrator | Thursday 16 April 2026 05:21:40 +0000 (0:00:01.499) 0:01:10.344 ******** 2026-04-16 05:21:46.380035 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:46.380042 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:46.380050 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:46.380058 | orchestrator | 2026-04-16 05:21:46.380068 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-04-16 05:21:46.380077 | orchestrator | Thursday 16 April 2026 05:21:41 +0000 (0:00:00.397) 0:01:10.742 ******** 2026-04-16 05:21:46.380084 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:46.380092 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:46.380100 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:46.380107 | orchestrator | 2026-04-16 05:21:46.380115 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-04-16 05:21:46.380123 | orchestrator | Thursday 16 April 2026 05:21:42 +0000 (0:00:01.202) 0:01:11.944 ******** 2026-04-16 05:21:46.380131 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:46.380139 | orchestrator | 2026-04-16 05:21:46.380146 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-16 05:21:46.380154 | orchestrator | Thursday 16 April 2026 05:21:43 +0000 (0:00:00.865) 0:01:12.809 ******** 2026-04-16 05:21:46.380183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:46.380202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:21:46.380212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 
05:21:46.380221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:46.380230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:46.380244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.008865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.008988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 05:21:47.009018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009087 | orchestrator | 2026-04-16 05:21:47.009101 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-16 05:21:47.009113 | orchestrator | Thursday 16 April 2026 05:21:46 +0000 (0:00:03.185) 0:01:15.994 ******** 2026-04-16 05:21:47.009125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 05:21:47.009137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:47.009171 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:47.009198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 05:21:52.802324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-04-16 05:21:52.802536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 05:21:52.802558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:52.802572 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:52.802586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 05:21:52.802598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:21:52.802684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 
05:21:52.802698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 05:21:52.802710 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:52.802721 | orchestrator | 2026-04-16 05:21:52.802733 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-16 05:21:52.802746 | orchestrator | Thursday 16 April 2026 05:21:47 +0000 (0:00:00.641) 0:01:16.636 ******** 2026-04-16 05:21:52.802758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802783 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:52.802793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802815 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:52.802826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-04-16 05:21:52.802847 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:52.802858 | orchestrator | 2026-04-16 05:21:52.802868 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-16 05:21:52.802879 | orchestrator | Thursday 16 April 2026 05:21:48 +0000 (0:00:01.087) 0:01:17.724 ******** 2026-04-16 05:21:52.802890 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:52.802908 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:52.802919 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:52.802929 | orchestrator | 2026-04-16 05:21:52.802940 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-16 05:21:52.802951 | orchestrator | Thursday 16 April 2026 05:21:49 +0000 (0:00:01.263) 0:01:18.988 ******** 2026-04-16 05:21:52.802962 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:21:52.802973 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:21:52.802984 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:21:52.802995 | orchestrator | 2026-04-16 05:21:52.803006 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-16 
05:21:52.803016 | orchestrator | Thursday 16 April 2026 05:21:51 +0000 (0:00:01.858) 0:01:20.846 ******** 2026-04-16 05:21:52.803027 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:52.803038 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:52.803048 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:52.803059 | orchestrator | 2026-04-16 05:21:52.803069 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-16 05:21:52.803080 | orchestrator | Thursday 16 April 2026 05:21:51 +0000 (0:00:00.295) 0:01:21.141 ******** 2026-04-16 05:21:52.803091 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:52.803101 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:52.803112 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:21:52.803122 | orchestrator | 2026-04-16 05:21:52.803133 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-16 05:21:52.803144 | orchestrator | Thursday 16 April 2026 05:21:51 +0000 (0:00:00.259) 0:01:21.400 ******** 2026-04-16 05:21:52.803154 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:21:52.803165 | orchestrator | 2026-04-16 05:21:52.803176 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-16 05:21:52.803191 | orchestrator | Thursday 16 April 2026 05:21:52 +0000 (0:00:00.927) 0:01:22.328 ******** 2026-04-16 05:21:55.930889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 05:21:55.930998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 05:21:55.931014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2026-04-16 05:21:55.931122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 
05:21:55.931164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:21:55.931212 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 05:21:56.941878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 05:21:56.941892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.941998 | orchestrator | 2026-04-16 05:21:56.942013 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-16 05:21:56.942087 | orchestrator | Thursday 16 April 2026 05:21:56 +0000 (0:00:03.357) 0:01:25.686 ******** 2026-04-16 05:21:56.942099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 05:21:56.942111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 05:21:56.942123 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.942135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:21:56.942156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179278 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:21:57.179294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 05:21:57.179307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 05:21:57.179905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.179988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.180005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.180024 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:21:57.180047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 05:21:57.180072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 05:21:57.180102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 05:21:57.180146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 05:22:06.371092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 05:22:06.371219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:22:06.371236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 05:22:06.371249 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:06.371264 | orchestrator | 2026-04-16 05:22:06.371276 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-16 05:22:06.371289 | orchestrator | Thursday 16 April 2026 05:21:57 +0000 (0:00:01.019) 0:01:26.706 ******** 2026-04-16 05:22:06.371301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371326 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:06.371337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371359 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:06.371370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-04-16 05:22:06.371496 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:06.371507 | orchestrator | 2026-04-16 05:22:06.371518 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-16 05:22:06.371529 | orchestrator | Thursday 16 April 2026 05:21:58 +0000 (0:00:01.225) 0:01:27.931 ******** 2026-04-16 05:22:06.371541 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:22:06.371552 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:22:06.371565 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:22:06.371578 | orchestrator | 2026-04-16 05:22:06.371591 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-16 05:22:06.371604 | orchestrator | Thursday 16 April 2026 05:21:59 +0000 (0:00:01.229) 0:01:29.161 ******** 2026-04-16 05:22:06.371617 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:22:06.371629 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:22:06.371642 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:22:06.371654 | 
orchestrator | 2026-04-16 05:22:06.371667 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-16 05:22:06.371680 | orchestrator | Thursday 16 April 2026 05:22:01 +0000 (0:00:01.863) 0:01:31.024 ******** 2026-04-16 05:22:06.371709 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:06.371721 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:06.371731 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:06.371742 | orchestrator | 2026-04-16 05:22:06.371753 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-16 05:22:06.371764 | orchestrator | Thursday 16 April 2026 05:22:01 +0000 (0:00:00.292) 0:01:31.317 ******** 2026-04-16 05:22:06.371775 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:22:06.371786 | orchestrator | 2026-04-16 05:22:06.371796 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-16 05:22:06.371807 | orchestrator | Thursday 16 April 2026 05:22:02 +0000 (0:00:00.940) 0:01:32.257 ******** 2026-04-16 05:22:06.371829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 05:22:06.371844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:06.371880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 05:22:09.150952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:09.151097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 05:22:09.151139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:09.151161 | orchestrator | 2026-04-16 05:22:09.151175 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-16 05:22:09.151187 | orchestrator | Thursday 16 April 2026 05:22:06 +0000 (0:00:03.754) 0:01:36.012 ******** 2026-04-16 05:22:09.151206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 05:22:09.151230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:12.483377 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:12.483513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 
05:22:12.483550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:12.483582 | orchestrator | skipping: [testbed-node-1] 
2026-04-16 05:22:12.483614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 05:22:12.483631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 05:22:12.483650 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:12.483660 | orchestrator | 2026-04-16 05:22:12.483671 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-16 05:22:12.483683 | orchestrator | 
Thursday 16 April 2026 05:22:09 +0000 (0:00:02.754) 0:01:38.766 ******** 2026-04-16 05:22:12.483694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:12.483714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:20.202250 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:20.202363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:20.202384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:20.202476 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:20.202498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:20.202540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 05:22:20.202561 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:20.202582 | orchestrator | 2026-04-16 05:22:20.202604 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-04-16 05:22:20.202622 | orchestrator | Thursday 16 April 2026 05:22:12 +0000 (0:00:03.241) 0:01:42.008 ******** 2026-04-16 05:22:20.202656 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:22:20.202667 | orchestrator 
| changed: [testbed-node-1]
2026-04-16 05:22:20.202678 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:20.202688 | orchestrator |
2026-04-16 05:22:20.202706 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-16 05:22:20.202732 | orchestrator | Thursday 16 April 2026 05:22:13 +0000 (0:00:01.227) 0:01:43.236 ********
2026-04-16 05:22:20.202752 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:20.202769 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:20.202786 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:20.202804 | orchestrator |
2026-04-16 05:22:20.202823 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-16 05:22:20.202841 | orchestrator | Thursday 16 April 2026 05:22:15 +0000 (0:00:01.904) 0:01:45.140 ********
2026-04-16 05:22:20.202860 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:20.202879 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:20.202896 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:20.202916 | orchestrator |
2026-04-16 05:22:20.202936 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-16 05:22:20.202956 | orchestrator | Thursday 16 April 2026 05:22:15 +0000 (0:00:00.293) 0:01:45.434 ********
2026-04-16 05:22:20.202972 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:22:20.202984 | orchestrator |
2026-04-16 05:22:20.202997 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-04-16 05:22:20.203010 | orchestrator | Thursday 16 April 2026 05:22:16 +0000 (0:00:00.967) 0:01:46.401 ********
2026-04-16 05:22:20.203045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 05:22:20.203063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 05:22:20.203076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 05:22:20.203088 | 
orchestrator | 2026-04-16 05:22:20.203099 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-16 05:22:20.203110 | orchestrator | Thursday 16 April 2026 05:22:19 +0000 (0:00:02.790) 0:01:49.192 ******** 2026-04-16 05:22:20.203134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 05:22:20.203147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 05:22:20.203158 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:20.203169 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:20.203181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 05:22:20.203267 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:20.203288 | orchestrator | 2026-04-16 05:22:20.203299 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-04-16 05:22:20.203311 | orchestrator | Thursday 16 April 2026 05:22:20 +0000 (0:00:00.359) 0:01:49.551 ******** 2026-04-16 05:22:20.203323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:20.203345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:28.290896 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:28.291005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:28.291023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:28.291038 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 05:22:28.291050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:28.291061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-04-16 05:22:28.291095 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:28.291107 | orchestrator | 2026-04-16 05:22:28.291119 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-04-16 05:22:28.291131 | orchestrator | Thursday 16 April 2026 05:22:20 +0000 (0:00:00.766) 0:01:50.318 ******** 2026-04-16 05:22:28.291143 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:22:28.291154 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:22:28.291165 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:22:28.291176 | orchestrator | 2026-04-16 05:22:28.291187 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-04-16 05:22:28.291199 | orchestrator | Thursday 16 April 2026 05:22:22 +0000 (0:00:01.270) 0:01:51.588 ******** 2026-04-16 05:22:28.291210 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:22:28.291221 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:22:28.291232 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:22:28.291243 | orchestrator | 2026-04-16 05:22:28.291254 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-04-16 05:22:28.291279 | orchestrator | Thursday 16 April 2026 05:22:24 +0000 (0:00:01.971) 0:01:53.560 ******** 2026-04-16 05:22:28.291290 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:28.291301 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
05:22:28.291311 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:28.291322 | orchestrator | 2026-04-16 05:22:28.291332 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-04-16 05:22:28.291343 | orchestrator | Thursday 16 April 2026 05:22:24 +0000 (0:00:00.275) 0:01:53.836 ******** 2026-04-16 05:22:28.291354 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:22:28.291365 | orchestrator | 2026-04-16 05:22:28.291376 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-16 05:22:28.291386 | orchestrator | Thursday 16 April 2026 05:22:25 +0000 (0:00:01.013) 0:01:54.850 ******** 2026-04-16 05:22:28.291452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 05:22:28.291489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 05:22:28.291516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 05:22:29.793684 | orchestrator | 2026-04-16 05:22:29.793785 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-16 05:22:29.793802 | orchestrator | Thursday 16 April 2026 05:22:28 +0000 (0:00:02.965) 0:01:57.815 ******** 2026-04-16 05:22:29.793840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 05:22:29.793857 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:29.793893 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 05:22:29.793929 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:29.793949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 05:22:29.793962 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:22:29.793973 | orchestrator | 2026-04-16 05:22:29.793985 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-16 05:22:29.793996 | orchestrator | Thursday 16 April 2026 05:22:28 +0000 (0:00:00.606) 0:01:58.422 ******** 2026-04-16 05:22:29.794009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:29.794106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:29.794122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:29.794144 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:37.777468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-16 05:22:37.777603 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:37.777632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:37.777657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:37.777699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:37.777722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:37.777744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-16 05:22:37.777765 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:37.777784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:37.777804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:37.777824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-04-16 05:22:37.777876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 05:22:37.777898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-04-16 05:22:37.777919 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:37.777938 | orchestrator |
2026-04-16 05:22:37.777960 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-16 05:22:37.777983 | orchestrator | Thursday 16 April 2026 05:22:29 +0000 (0:00:00.891) 0:01:59.313 ********
2026-04-16 05:22:37.778003 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:37.778103 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:37.778126 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:37.778145 | orchestrator |
2026-04-16 05:22:37.778163 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-16 05:22:37.778182 | orchestrator | Thursday 16 April 2026 05:22:31 +0000 (0:00:01.515) 0:02:00.828 ********
2026-04-16 05:22:37.778203 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:37.778223 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:37.778242 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:37.778261 | orchestrator |
2026-04-16 05:22:37.778279 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-16 05:22:37.778298 | orchestrator | Thursday 16 April 2026 05:22:32 +0000 (0:00:01.664) 0:02:02.493 ********
2026-04-16 05:22:37.778318 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:37.778336 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:37.778380 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:37.778428 | orchestrator |
2026-04-16 05:22:37.778448 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-16 05:22:37.778466 | orchestrator | Thursday 16 April 2026 05:22:33 +0000 (0:00:00.272) 0:02:02.780 ********
2026-04-16 05:22:37.778485 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:37.778503 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:37.778521 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:37.778539 | orchestrator |
2026-04-16 05:22:37.778558 | orchestrator | TASK [include_role : keystone] *************************************************
2026-04-16 05:22:37.778577 | orchestrator | Thursday 16 April 2026 05:22:33 +0000 (0:00:00.272) 0:02:03.053 ********
2026-04-16 05:22:37.778596 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:22:37.778613 | orchestrator |
2026-04-16 05:22:37.778632 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-04-16 05:22:37.778650 | orchestrator | Thursday 16 April 2026 05:22:34 +0000 (0:00:01.091) 0:02:04.144 ********
2026-04-16 05:22:37.778687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:37.778731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:37.778753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:37.778772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:37.778805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:38.398818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:38.398922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:38.398962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:38.398974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:38.398986 | orchestrator |
2026-04-16 05:22:38.399000 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2026-04-16 05:22:38.399012 | orchestrator | Thursday 16 April 2026 05:22:37 +0000 (0:00:03.156) 0:02:07.301 ********
2026-04-16 05:22:38.399043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:38.399063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:38.399075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:38.399094 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:38.399107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:38.399119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:38.399131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:38.399143 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:38.399166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-04-16 05:22:47.038276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 05:22:47.038458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 05:22:47.038480 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:47.038494 | orchestrator |
2026-04-16 05:22:47.038506 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-16 05:22:47.038519 | orchestrator | Thursday 16 April 2026 05:22:38 +0000 (0:00:00.621) 0:02:07.922 ********
2026-04-16 05:22:47.038532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038559 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:47.038571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038594 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:47.038606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-04-16 05:22:47.038629 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:47.038640 | orchestrator |
2026-04-16 05:22:47.038651 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-04-16 05:22:47.038662 | orchestrator | Thursday 16 April 2026 05:22:39 +0000 (0:00:00.971) 0:02:08.893 ********
2026-04-16 05:22:47.038673 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:47.038684 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:47.038728 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:47.038740 | orchestrator |
2026-04-16 05:22:47.038751 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-04-16 05:22:47.038762 | orchestrator | Thursday 16 April 2026 05:22:40 +0000 (0:00:01.253) 0:02:10.147 ********
2026-04-16 05:22:47.038773 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:47.038784 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:47.038795 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:47.038806 | orchestrator |
2026-04-16 05:22:47.038817 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-04-16 05:22:47.038828 | orchestrator | Thursday 16 April 2026 05:22:42 +0000 (0:00:00.312) 0:02:12.075 ********
2026-04-16 05:22:47.038839 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:47.038850 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:47.038876 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:47.038903 | orchestrator |
2026-04-16 05:22:47.038926 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-04-16 05:22:47.038956 | orchestrator | Thursday 16 April 2026 05:22:42 +0000 (0:00:00.312) 0:02:12.387 ********
2026-04-16 05:22:47.038968 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:22:47.038979 | orchestrator |
2026-04-16 05:22:47.038990 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-04-16 05:22:47.039001 | orchestrator | Thursday 16 April 2026 05:22:43 +0000 (0:00:01.144) 0:02:13.532 ********
2026-04-16 05:22:47.039014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:47.039030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:47.039042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:47.039063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:47.039085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:51.939882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:51.939992 | orchestrator |
2026-04-16 05:22:51.940009 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-04-16 05:22:51.940022 | orchestrator | Thursday 16 April 2026 05:22:47 +0000 (0:00:03.032) 0:02:16.564 ********
2026-04-16 05:22:51.940037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:51.940094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:51.940131 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:51.940150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:51.940182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:51.940195 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:51.940206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 05:22:51.940218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 05:22:51.940237 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:51.940248 | orchestrator |
2026-04-16 05:22:51.940260 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-04-16 05:22:51.940271 | orchestrator | Thursday 16 April 2026 05:22:47 +0000 (0:00:00.602) 0:02:17.167 ********
2026-04-16 05:22:51.940284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940310 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:22:51.940321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940343 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:22:51.940354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-04-16 05:22:51.940376 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:22:51.940387 | orchestrator |
2026-04-16 05:22:51.940487 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-04-16 05:22:51.940502 | orchestrator | Thursday 16 April 2026 05:22:48 +0000 (0:00:00.824) 0:02:17.992 ********
2026-04-16 05:22:51.940515 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:51.940528 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:51.940541 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:51.940554 | orchestrator |
2026-04-16 05:22:51.940567 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-04-16 05:22:51.940580 | orchestrator | Thursday 16 April 2026 05:22:49 +0000 (0:00:01.529) 0:02:19.522 ********
2026-04-16 05:22:51.940593 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:22:51.940606 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:22:51.940618 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:22:51.940632 | orchestrator |
2026-04-16 05:22:51.940644 | orchestrator | TASK [include_role : manila] ***************************************************
2026-04-16 05:22:51.940665 | orchestrator | Thursday 16 April 2026 05:22:51 +0000 (0:00:01.936) 0:02:21.458 ********
2026-04-16 05:22:56.190275 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:22:56.190388 | orchestrator |
2026-04-16 05:22:56.190478 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-04-16 05:22:56.190499 | orchestrator | Thursday 16 April 2026 05:22:52 +0000 (0:00:00.996) 0:02:22.454 ********
2026-04-16 05:22:56.190522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-04-16 05:22:56.190577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 05:22:56.190601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-16 05:22:56.190619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-16 05:22:56.190648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes':
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 05:22:56.190703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:22:56.190726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 05:22:56.190758 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 05:22:56.190776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 05:22:56.190796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:22:56.190825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 05:22:56.190859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.128694 | orchestrator | 2026-04-16 05:22:57.128796 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-16 05:22:57.128812 | orchestrator | Thursday 16 April 2026 05:22:56 +0000 (0:00:03.338) 0:02:25.793 ******** 2026-04-16 05:22:57.128856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 05:22:57.128880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.128902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.128922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.128941 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:22:57.128981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 05:22:57.129027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.129050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.129061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.129072 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:22:57.129083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 05:22:57.129095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.129111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 05:22:57.129131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 05:23:07.890128 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:07.890238 | orchestrator | 2026-04-16 05:23:07.890254 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-16 05:23:07.890267 | orchestrator | Thursday 16 April 2026 05:22:57 +0000 (0:00:00.946) 0:02:26.740 ******** 2026-04-16 05:23:07.890280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890332 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:07.890345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890367 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:07.890378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-04-16 05:23:07.890443 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:07.890454 | orchestrator | 2026-04-16 05:23:07.890465 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-16 05:23:07.890476 | orchestrator | Thursday 16 April 2026 05:22:58 +0000 (0:00:00.816) 0:02:27.557 ******** 2026-04-16 05:23:07.890487 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:07.890498 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:07.890509 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:07.890519 | orchestrator | 2026-04-16 05:23:07.890530 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-16 05:23:07.890541 | orchestrator | Thursday 16 April 2026 05:22:59 +0000 (0:00:01.190) 0:02:28.747 ******** 2026-04-16 05:23:07.890552 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:07.890563 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:07.890574 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:07.890584 | orchestrator | 2026-04-16 05:23:07.890595 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-16 05:23:07.890609 | orchestrator | Thursday 16 April 2026 05:23:01 +0000 (0:00:02.006) 0:02:30.754 ******** 2026-04-16 05:23:07.890622 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:07.890634 | orchestrator | 2026-04-16 05:23:07.890646 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-16 05:23:07.890659 | orchestrator | Thursday 16 April 2026 05:23:02 +0000 (0:00:01.213) 0:02:31.968 ******** 2026-04-16 05:23:07.890671 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 05:23:07.890684 | orchestrator | 2026-04-16 05:23:07.890696 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-16 05:23:07.890798 | orchestrator | Thursday 16 April 2026 05:23:05 +0000 (0:00:03.036) 0:02:35.004 ******** 2026-04-16 05:23:07.890878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:07.890910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 05:23:07.890931 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:07.890953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:07.890976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 05:23:07.890988 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:07.891010 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:09.988593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 05:23:09.988705 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:09.988724 | orchestrator | 2026-04-16 05:23:09.988737 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-16 05:23:09.988749 | orchestrator | Thursday 16 April 2026 05:23:07 +0000 (0:00:02.405) 0:02:37.409 ******** 2026-04-16 05:23:09.988801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:09.988818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 05:23:09.988829 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:09.988862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:09.988894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-04-16 05:23:09.988907 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:09.988919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:23:09.988938 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 05:23:18.103673 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:18.103800 | orchestrator | 2026-04-16 05:23:18.103819 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-16 05:23:18.103834 | orchestrator | Thursday 16 April 2026 05:23:09 +0000 (0:00:02.105) 0:02:39.515 ******** 2026-04-16 05:23:18.103849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103917 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:18.103929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103952 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:18.103964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 05:23:18.103986 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:18.103997 | orchestrator | 2026-04-16 05:23:18.104008 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-16 05:23:18.104019 | orchestrator | Thursday 16 April 2026 05:23:12 +0000 (0:00:02.049) 0:02:41.564 ******** 2026-04-16 05:23:18.104031 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:18.104060 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:18.104080 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:18.104091 | orchestrator | 2026-04-16 05:23:18.104102 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-16 05:23:18.104113 | orchestrator | Thursday 16 April 2026 05:23:13 +0000 (0:00:01.771) 0:02:43.336 ******** 2026-04-16 05:23:18.104124 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:18.104134 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:18.104145 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:18.104156 | orchestrator | 2026-04-16 05:23:18.104167 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-16 05:23:18.104180 | orchestrator | Thursday 16 April 2026 05:23:15 +0000 (0:00:01.349) 0:02:44.686 ******** 2026-04-16 05:23:18.104193 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:18.104205 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:18.104217 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:18.104231 | orchestrator | 2026-04-16 05:23:18.104243 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-16 05:23:18.104255 | orchestrator | Thursday 16 April 2026 05:23:15 +0000 (0:00:00.259) 0:02:44.945 ******** 2026-04-16 05:23:18.104268 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:18.104280 | orchestrator | 2026-04-16 05:23:18.104292 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-16 05:23:18.104305 | orchestrator | Thursday 16 April 2026 05:23:16 +0000 (0:00:01.105) 0:02:46.050 ******** 2026-04-16 05:23:18.104324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 05:23:18.104341 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 05:23:18.104355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 05:23:18.104367 | orchestrator | 2026-04-16 05:23:18.104380 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-16 05:23:18.104427 | orchestrator | Thursday 16 April 2026 05:23:17 +0000 (0:00:01.391) 0:02:47.442 ******** 2026-04-16 05:23:18.104449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 05:23:26.038684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 05:23:26.038798 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:26.038817 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:26.038830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 05:23:26.038842 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:26.038853 | orchestrator | 2026-04-16 05:23:26.038865 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-16 05:23:26.038877 | orchestrator | Thursday 16 April 2026 05:23:18 +0000 (0:00:00.363) 0:02:47.806 ******** 2026-04-16 05:23:26.038890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 05:23:26.038902 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:26.038914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 05:23:26.038925 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:26.038935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 05:23:26.038971 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:26.038983 | orchestrator | 2026-04-16 05:23:26.039034 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-16 05:23:26.039046 | orchestrator | Thursday 16 April 2026 05:23:19 +0000 (0:00:00.781) 0:02:48.588 ******** 2026-04-16 05:23:26.039057 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:26.039068 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:26.039079 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:26.039090 | orchestrator | 2026-04-16 05:23:26.039101 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-16 05:23:26.039111 | orchestrator | Thursday 16 April 2026 05:23:19 +0000 (0:00:00.439) 0:02:49.027 ******** 2026-04-16 05:23:26.039122 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:26.039133 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:26.039144 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:26.039154 | orchestrator | 2026-04-16 05:23:26.039165 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-16 05:23:26.039176 | orchestrator | Thursday 16 April 2026 05:23:20 +0000 (0:00:01.201) 0:02:50.229 ******** 2026-04-16 05:23:26.039187 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:26.039198 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:26.039212 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:26.039224 | orchestrator | 2026-04-16 05:23:26.039237 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-16 05:23:26.039250 | orchestrator | Thursday 16 April 2026 05:23:21 +0000 (0:00:00.306) 0:02:50.535 ******** 2026-04-16 05:23:26.039263 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:26.039275 | orchestrator | 2026-04-16 05:23:26.039288 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-04-16 05:23:26.039301 | orchestrator | Thursday 16 April 2026 05:23:22 +0000 (0:00:01.359) 0:02:51.895 ******** 2026-04-16 05:23:26.039340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 05:23:26.039371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.039392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.039458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.039480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-16 05:23:26.039513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.111297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.111457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.111480 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.111515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 05:23:26.111530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:26.111542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.111573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.111592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-16 05:23:26.111611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.111622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 05:23:26.111633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.111654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.208692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.208811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.208866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-16 05:23:26.208889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:26.208910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2026-04-16 05:23:26.208956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.208988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:26.209022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.209042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': 
{'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.209055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.209069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 
5672"], 'timeout': '30'}}})  2026-04-16 05:23:26.209100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:26.437282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.437296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 
05:23:26.437321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-16 05:23:26.437353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:26.437517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:26.437546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:26.437561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:26.437582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:27.422527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-16 05:23:27.422640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.422657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.422677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:27.422701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:27.422720 | orchestrator | 2026-04-16 05:23:27.422740 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-16 05:23:27.422787 | orchestrator | Thursday 16 April 2026 05:23:26 +0000 (0:00:04.063) 0:02:55.959 ******** 2026-04-16 05:23:27.422841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 05:23:27.422868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.422890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.422907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.422919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-16 05:23:27.422953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.526488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.526510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:27.526537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 05:23:27.526586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-16 05:23:27.526663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.526687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.526732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-16 05:23:27.833370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': 
{'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:27.833544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.833565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:27.833602 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:27.833617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.833633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.833672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.833717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:27.833738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.833758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-04-16 05:23:27.833791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:27.833811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:27.833830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:27.833865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:28.067543 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:28.067640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 05:23:28.067699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-04-16 05:23:28.067791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:28.067824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:28.067836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:28.067868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:28.067886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2026-04-16 05:23:37.805316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-04-16 05:23:37.805483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 05:23:37.805541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 05:23:37.805583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 05:23:37.805602 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:37.805622 | orchestrator | 2026-04-16 05:23:37.805641 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-04-16 05:23:37.805659 | orchestrator | Thursday 16 April 2026 05:23:28 +0000 (0:00:01.635) 0:02:57.594 ******** 2026-04-16 05:23:37.805677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805716 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:37.805734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805769 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:37.805812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-04-16 05:23:37.805846 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:37.805859 | orchestrator | 2026-04-16 05:23:37.805872 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-04-16 05:23:37.805885 | orchestrator | Thursday 16 April 2026 05:23:29 +0000 (0:00:01.907) 0:02:59.502 ******** 2026-04-16 05:23:37.805898 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:37.805910 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:37.805922 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:37.805934 | orchestrator | 2026-04-16 05:23:37.805947 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-04-16 05:23:37.805959 | orchestrator | Thursday 16 April 2026 05:23:31 +0000 (0:00:01.240) 0:03:00.742 ******** 2026-04-16 05:23:37.805971 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:37.805983 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:37.805996 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:37.806007 | orchestrator | 2026-04-16 
05:23:37.806084 | orchestrator | TASK [include_role : placement] ************************************************ 2026-04-16 05:23:37.806098 | orchestrator | Thursday 16 April 2026 05:23:33 +0000 (0:00:01.964) 0:03:02.707 ******** 2026-04-16 05:23:37.806111 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:37.806123 | orchestrator | 2026-04-16 05:23:37.806135 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-04-16 05:23:37.806157 | orchestrator | Thursday 16 April 2026 05:23:34 +0000 (0:00:01.208) 0:03:03.915 ******** 2026-04-16 05:23:37.806172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:37.806197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:37.806209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:37.806228 | orchestrator | 2026-04-16 05:23:37.806239 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-04-16 05:23:37.806259 | orchestrator | Thursday 16 April 2026 05:23:37 +0000 (0:00:03.413) 0:03:07.328 ******** 2026-04-16 05:23:47.923566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 05:23:47.923690 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:47.923707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 05:23:47.923719 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:47.923744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 05:23:47.923756 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:47.923766 | orchestrator | 2026-04-16 05:23:47.923777 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-04-16 05:23:47.923788 | orchestrator | Thursday 16 April 2026 05:23:38 +0000 (0:00:00.499) 0:03:07.828 ******** 2026-04-16 05:23:47.923799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 05:23:47.923833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 05:23:47.923845 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:47.923855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 
05:23:47.923865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 05:23:47.923875 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:47.923902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 05:23:47.923913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-04-16 05:23:47.923922 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:47.923932 | orchestrator | 2026-04-16 05:23:47.923942 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-04-16 05:23:47.923952 | orchestrator | Thursday 16 April 2026 05:23:39 +0000 (0:00:00.719) 0:03:08.548 ******** 2026-04-16 05:23:47.923962 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:47.923971 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:47.923981 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:47.923990 | orchestrator | 2026-04-16 05:23:47.924000 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-04-16 05:23:47.924009 | orchestrator | Thursday 16 April 2026 05:23:40 +0000 (0:00:01.613) 0:03:10.162 ******** 2026-04-16 05:23:47.924019 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:47.924031 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:47.924043 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:47.924054 | orchestrator | 2026-04-16 05:23:47.924066 | orchestrator 
| TASK [include_role : nova] ***************************************************** 2026-04-16 05:23:47.924077 | orchestrator | Thursday 16 April 2026 05:23:42 +0000 (0:00:02.015) 0:03:12.177 ******** 2026-04-16 05:23:47.924089 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:47.924101 | orchestrator | 2026-04-16 05:23:47.924112 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-04-16 05:23:47.924123 | orchestrator | Thursday 16 April 2026 05:23:43 +0000 (0:00:01.233) 0:03:13.411 ******** 2026-04-16 05:23:47.924138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:47.924167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:47.924181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:47.924203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:48.851933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 05:23:48.852120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852144 | orchestrator | 2026-04-16 05:23:48.852157 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-04-16 05:23:48.852170 | orchestrator | Thursday 16 April 2026 05:23:47 +0000 (0:00:04.034) 0:03:17.445 ******** 2026-04-16 05:23:48.852204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 05:23:48.852217 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:48.852257 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:23:48.852270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 05:23:48.852289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:59.515140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:59.515273 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:59.515325 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 05:23:59.515381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 05:23:59.515488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 05:23:59.515511 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:59.515529 | orchestrator | 2026-04-16 05:23:59.515547 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-16 05:23:59.515568 | orchestrator | Thursday 16 April 2026 05:23:48 +0000 (0:00:00.928) 0:03:18.374 ******** 2026-04-16 05:23:59.515590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515703 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
05:23:59.515723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515784 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:23:59.515795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-04-16 05:23:59.515847 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:23:59.515858 | orchestrator | 2026-04-16 05:23:59.515869 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-16 05:23:59.515880 | orchestrator | Thursday 16 April 2026 05:23:49 +0000 (0:00:00.824) 0:03:19.199 ******** 2026-04-16 05:23:59.515891 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:59.515902 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:59.515913 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:59.515924 | orchestrator | 2026-04-16 05:23:59.515934 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-16 05:23:59.515945 | orchestrator | Thursday 16 April 2026 05:23:51 +0000 (0:00:01.384) 0:03:20.583 ******** 2026-04-16 05:23:59.515956 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:23:59.515967 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:23:59.515978 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:23:59.515988 | orchestrator | 2026-04-16 05:23:59.515999 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-16 05:23:59.516010 | orchestrator | Thursday 16 April 2026 05:23:53 +0000 (0:00:02.001) 0:03:22.584 ******** 2026-04-16 05:23:59.516021 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:23:59.516032 | orchestrator | 2026-04-16 05:23:59.516043 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-16 05:23:59.516053 | orchestrator | Thursday 16 April 2026 05:23:54 +0000 (0:00:01.471) 0:03:24.055 ******** 2026-04-16 05:23:59.516064 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-novncproxy) 2026-04-16 05:23:59.516077 | orchestrator | 2026-04-16 05:23:59.516088 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-16 05:23:59.516098 | orchestrator | Thursday 16 April 2026 05:23:55 +0000 (0:00:00.807) 0:03:24.863 ******** 2026-04-16 05:23:59.516111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 05:23:59.516141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 05:24:11.191088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 05:24:11.191204 | orchestrator | 
2026-04-16 05:24:11.191223 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-16 05:24:11.191236 | orchestrator | Thursday 16 April 2026 05:23:59 +0000 (0:00:04.171) 0:03:29.035 ******** 2026-04-16 05:24:11.191250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191262 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:11.191291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191303 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:11.191314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191326 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:11.191337 | orchestrator | 2026-04-16 05:24:11.191348 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-16 05:24:11.191359 | orchestrator | Thursday 16 April 2026 05:24:00 +0000 (0:00:00.994) 0:03:30.029 ******** 2026-04-16 05:24:11.191373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191484 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:11.191497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191520 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:11.191531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191543 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 05:24:11.191572 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:11.191584 | orchestrator | 2026-04-16 05:24:11.191595 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 05:24:11.191606 | orchestrator | Thursday 16 April 2026 05:24:01 +0000 (0:00:01.447) 0:03:31.477 ******** 2026-04-16 05:24:11.191619 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:24:11.191632 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:24:11.191645 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:24:11.191657 | orchestrator | 2026-04-16 05:24:11.191670 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 05:24:11.191683 | orchestrator | Thursday 16 April 2026 05:24:04 +0000 (0:00:02.437) 0:03:33.914 ******** 2026-04-16 05:24:11.191696 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:24:11.191708 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:24:11.191720 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:24:11.191732 | orchestrator | 2026-04-16 05:24:11.191745 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-16 05:24:11.191758 | orchestrator | Thursday 16 April 2026 05:24:07 +0000 (0:00:02.897) 0:03:36.811 ******** 2026-04-16 05:24:11.191772 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-16 05:24:11.191786 | orchestrator | 2026-04-16 05:24:11.191798 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-16 05:24:11.191811 | orchestrator | 
Thursday 16 April 2026 05:24:08 +0000 (0:00:01.459) 0:03:38.271 ******** 2026-04-16 05:24:11.191831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191846 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:11.191859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191880 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:11.191894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191908 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
05:24:11.191920 | orchestrator | 2026-04-16 05:24:11.191932 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-16 05:24:11.191945 | orchestrator | Thursday 16 April 2026 05:24:09 +0000 (0:00:01.199) 0:03:39.471 ******** 2026-04-16 05:24:11.191959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.191972 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:11.191985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:11.192005 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:32.873569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 05:24:32.873716 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:32.873746 | orchestrator | 2026-04-16 05:24:32.873768 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-16 05:24:32.873789 | orchestrator | Thursday 16 April 2026 05:24:11 +0000 (0:00:01.242) 0:03:40.713 ******** 2026-04-16 05:24:32.873809 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:32.873827 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:32.873846 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:32.873864 | orchestrator | 2026-04-16 05:24:32.873880 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 05:24:32.873899 | orchestrator | Thursday 16 April 2026 05:24:12 +0000 (0:00:01.762) 0:03:42.476 ******** 2026-04-16 05:24:32.873916 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:24:32.873934 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:24:32.873951 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:24:32.873967 | orchestrator | 2026-04-16 05:24:32.873984 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 05:24:32.874004 | orchestrator | Thursday 16 April 2026 05:24:15 +0000 (0:00:02.309) 0:03:44.785 ******** 2026-04-16 05:24:32.874148 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:24:32.874173 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:24:32.874193 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:24:32.874205 | orchestrator | 2026-04-16 05:24:32.874216 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-16 05:24:32.874236 | orchestrator | Thursday 16 April 2026 05:24:17 +0000 (0:00:02.691) 0:03:47.477 ******** 2026-04-16 05:24:32.874250 | 
orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-16 05:24:32.874271 | orchestrator | 2026-04-16 05:24:32.874288 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-16 05:24:32.874306 | orchestrator | Thursday 16 April 2026 05:24:19 +0000 (0:00:01.139) 0:03:48.616 ******** 2026-04-16 05:24:32.874325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874346 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:32.874365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874384 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:32.874430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874450 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:32.874469 | orchestrator | 2026-04-16 05:24:32.874487 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-16 05:24:32.874507 | orchestrator | Thursday 16 April 2026 05:24:20 +0000 (0:00:01.245) 0:03:49.862 ******** 2026-04-16 05:24:32.874554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874575 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:32.874595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874627 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:32.874646 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 05:24:32.874666 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:32.874685 | orchestrator | 2026-04-16 05:24:32.874711 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-16 05:24:32.874733 | orchestrator | Thursday 16 April 2026 05:24:21 +0000 (0:00:01.232) 0:03:51.095 ******** 2026-04-16 05:24:32.874751 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:32.874768 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:32.874788 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:32.874805 | orchestrator | 2026-04-16 05:24:32.874824 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 05:24:32.874841 | orchestrator | Thursday 16 April 2026 05:24:22 +0000 (0:00:01.418) 0:03:52.514 ******** 2026-04-16 05:24:32.874860 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:24:32.874878 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:24:32.874897 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:24:32.874915 | orchestrator | 2026-04-16 05:24:32.874933 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 05:24:32.874952 | orchestrator | Thursday 16 April 2026 05:24:25 +0000 (0:00:02.245) 0:03:54.759 ******** 2026-04-16 05:24:32.874970 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:24:32.874989 | orchestrator | ok: 
[testbed-node-1] 2026-04-16 05:24:32.875006 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:24:32.875024 | orchestrator | 2026-04-16 05:24:32.875044 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-16 05:24:32.875062 | orchestrator | Thursday 16 April 2026 05:24:28 +0000 (0:00:03.134) 0:03:57.894 ******** 2026-04-16 05:24:32.875080 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:24:32.875098 | orchestrator | 2026-04-16 05:24:32.875117 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-16 05:24:32.875134 | orchestrator | Thursday 16 April 2026 05:24:29 +0000 (0:00:01.526) 0:03:59.421 ******** 2026-04-16 05:24:32.875155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 05:24:32.875175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:32.875220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.561890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 
05:24:33.561997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.562070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:33.562086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:24:33.562099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.562132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.562164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 05:24:33.562176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:24:33.562188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:33.562200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-04-16 05:24:33.562211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.562265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:24:33.562279 | orchestrator | 2026-04-16 05:24:33.562292 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-16 05:24:33.562304 | orchestrator | Thursday 16 April 2026 05:24:32 +0000 (0:00:03.109) 0:04:02.530 ******** 2026-04-16 05:24:33.562327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 05:24:33.686547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:33.686645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.686660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.686671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:24:33.686705 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:33.686719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 05:24:33.686732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:33.686769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.686782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.686794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 05:24:33.686813 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:33.686825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 05:24:33.686837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 05:24:33.686848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 05:24:33.686874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 05:24:45.180071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 05:24:45.180188 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:24:45.180205 | orchestrator |
2026-04-16 05:24:45.180218 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-04-16 05:24:45.180230 | orchestrator | Thursday 16 April 2026 05:24:33 +0000 (0:00:00.685) 0:04:03.216 ********
2026-04-16 05:24:45.180242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180295 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:24:45.180306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180328 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:24:45.180339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-04-16 05:24:45.180361 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:24:45.180371 | orchestrator |
2026-04-16 05:24:45.180382 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-04-16 05:24:45.180393 | orchestrator | Thursday 16 April 2026 05:24:34 +0000 (0:00:01.092) 0:04:04.309 ********
2026-04-16 05:24:45.180455 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:24:45.180467 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:24:45.180478 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:24:45.180488 | orchestrator |
2026-04-16 05:24:45.180499 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-04-16 05:24:45.180510 | orchestrator | Thursday 16 April 2026 05:24:36 +0000 (0:00:01.375) 0:04:05.684 ********
2026-04-16 05:24:45.180521 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:24:45.180531 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:24:45.180542 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:24:45.180553 | orchestrator |
2026-04-16 05:24:45.180564 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-04-16 05:24:45.180575 | orchestrator | Thursday 16 April 2026 05:24:38 +0000 (0:00:02.164) 0:04:07.849 ********
2026-04-16 05:24:45.180586 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:24:45.180598 | orchestrator |
2026-04-16 05:24:45.180609 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config]
***************** 2026-04-16 05:24:45.180621 | orchestrator | Thursday 16 April 2026 05:24:39 +0000 (0:00:01.373) 0:04:09.223 ******** 2026-04-16 05:24:45.180649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:24:45.180688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:24:45.180713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:24:45.180729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:24:45.180750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:24:45.180773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:24:46.715082 | orchestrator | 2026-04-16 05:24:46.715187 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-16 05:24:46.715204 | orchestrator | Thursday 16 April 2026 05:24:45 +0000 (0:00:05.472) 0:04:14.695 ******** 2026-04-16 05:24:46.715219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:24:46.715237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:24:46.715251 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:46.715264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:24:46.715294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:24:46.715373 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:46.715389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:24:46.715429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:24:46.715441 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:46.715452 | orchestrator | 2026-04-16 05:24:46.715464 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-16 05:24:46.715475 | orchestrator | Thursday 16 April 2026 05:24:45 +0000 (0:00:00.660) 0:04:15.355 ******** 2026-04-16 05:24:46.715488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-04-16 05:24:46.715501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-16 05:24:46.715515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-04-16 05:24:46.715537 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:46.715554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})
2026-04-16 05:24:46.715566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-16 05:24:46.715577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-16 05:24:46.715591 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:24:46.715604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-04-16 05:24:46.715617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-16 05:24:46.715645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-04-16 05:24:52.607994 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:24:52.608078 | orchestrator |
2026-04-16 05:24:52.608087 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-04-16 05:24:52.608095 | orchestrator | Thursday 16 April 2026 05:24:46 +0000 (0:00:00.879) 0:04:16.234 ********
2026-04-16 05:24:52.608101 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:24:52.608107 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:24:52.608113 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:24:52.608119 | orchestrator |
2026-04-16 05:24:52.608125 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-04-16 05:24:52.608131 | orchestrator | Thursday 16 April 2026 05:24:47 +0000 (0:00:00.423) 0:04:16.658 ********
2026-04-16 05:24:52.608136 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:24:52.608142 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:24:52.608147 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:24:52.608153 | orchestrator |
2026-04-16 05:24:52.608159 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-04-16 05:24:52.608164 | orchestrator | Thursday 16 April 2026 05:24:48 +0000 (0:00:01.428) 0:04:18.086 ********
2026-04-16 05:24:52.608170 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:24:52.608176 | orchestrator |
2026-04-16 05:24:52.608181 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-04-16 05:24:52.608187 | orchestrator | Thursday 16 April 2026 05:24:50 +0000 (0:00:01.660) 0:04:19.747 ********
2026-04-16 05:24:52.608195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091',
'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 05:24:52.608223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:52.608242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:52.608249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:52.608257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:52.608275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 05:24:52.608281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:52.608287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:52.608297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:52.608306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 05:24:52.608312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:52.608318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:52.608328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.104641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.104758 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:54.104800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 05:24:54.104834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:54.104848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.104869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.104911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:54.104932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 05:24:54.104963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:54.104991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.105012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.105045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 05:24:54.744305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:54.744536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:54.744571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.744608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.744628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:54.744647 | orchestrator | 2026-04-16 05:24:54.744667 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-16 05:24:54.744686 | orchestrator | Thursday 16 April 2026 05:24:54 +0000 (0:00:04.017) 0:04:23.765 ******** 2026-04-16 05:24:54.744707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-16 05:24:54.744755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:54.744786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.744798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:54.744810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:54.744835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-16 05:24:54.744852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-16 05:24:54.744874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:55.247326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:55.247483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:55.247570 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:55.247584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:55.247645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-16 05:24:55.247662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:55.247679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:55.247702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:55.247714 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:55.247726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-16 05:24:55.247755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 05:24:56.715908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:56.716013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:56.716045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 05:24:56.716059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-04-16 05:24:56.716073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-04-16 05:24:56.716102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:56.716128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 05:24:56.716137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 05:24:56.716147 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:24:56.716157 | orchestrator | 2026-04-16 05:24:56.716167 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-16 05:24:56.716177 | orchestrator | Thursday 16 April 2026 05:24:55 +0000 (0:00:01.494) 0:04:25.259 ******** 2026-04-16 05:24:56.716191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:24:56.716227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:24:56.716238 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:24:56.716246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:24:56.716281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:24:56.716289 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:24:56.716298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-04-16 05:24:56.716315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:24:56.716330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-04-16 05:25:04.092950 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:04.093069 | orchestrator | 2026-04-16 05:25:04.093086 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-16 05:25:04.093101 | orchestrator | Thursday 16 April 2026 05:24:56 +0000 (0:00:00.973) 0:04:26.233 ******** 2026-04-16 05:25:04.093112 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:04.093123 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:04.093134 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:04.093144 | orchestrator | 2026-04-16 05:25:04.093156 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-04-16 05:25:04.093167 | orchestrator | Thursday 16 April 2026 05:24:57 +0000 (0:00:00.430) 0:04:26.664 ******** 2026-04-16 05:25:04.093178 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:04.093188 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:04.093199 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:04.093210 | orchestrator | 2026-04-16 05:25:04.093221 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-04-16 05:25:04.093232 | orchestrator | Thursday 16 April 2026 05:24:58 +0000 (0:00:01.271) 0:04:27.935 ******** 2026-04-16 05:25:04.093243 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:25:04.093254 | orchestrator | 2026-04-16 05:25:04.093265 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-04-16 05:25:04.093276 | orchestrator | Thursday 16 April 2026 05:25:00 +0000 (0:00:01.663) 0:04:29.599 ******** 2026-04-16 05:25:04.093291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:25:04.093335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:25:04.093392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:25:04.093441 | orchestrator | 2026-04-16 05:25:04.093453 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-04-16 05:25:04.093483 | orchestrator | Thursday 16 April 2026 05:25:02 +0000 (0:00:02.286) 0:04:31.885 ******** 2026-04-16 05:25:04.093498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 05:25:04.093529 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:04.093543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 05:25:04.093558 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:04.093571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 05:25:04.093585 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:04.093598 | orchestrator | 2026-04-16 05:25:04.093611 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-04-16 05:25:04.093625 | orchestrator | Thursday 16 April 2026 05:25:02 +0000 (0:00:00.400) 0:04:32.285 ******** 2026-04-16 05:25:04.093640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-16 05:25:04.093655 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:04.093668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-16 05:25:04.093682 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:04.093695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-04-16 05:25:04.093707 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:04.093720 | orchestrator | 2026-04-16 05:25:04.093733 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-04-16 05:25:04.093747 | orchestrator | Thursday 16 April 2026 05:25:03 +0000 (0:00:00.893) 0:04:33.179 ******** 2026-04-16 05:25:04.093767 | orchestrator | skipping: [testbed-node-0] 
2026-04-16 05:25:13.339927 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:13.340057 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:13.340083 | orchestrator | 2026-04-16 05:25:13.340102 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-04-16 05:25:13.340123 | orchestrator | Thursday 16 April 2026 05:25:04 +0000 (0:00:00.442) 0:04:33.621 ******** 2026-04-16 05:25:13.340140 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:13.340185 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:13.340203 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:13.340219 | orchestrator | 2026-04-16 05:25:13.340236 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-04-16 05:25:13.340253 | orchestrator | Thursday 16 April 2026 05:25:05 +0000 (0:00:01.279) 0:04:34.901 ******** 2026-04-16 05:25:13.340269 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:25:13.340285 | orchestrator | 2026-04-16 05:25:13.340301 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-04-16 05:25:13.340317 | orchestrator | Thursday 16 April 2026 05:25:06 +0000 (0:00:01.466) 0:04:36.367 ******** 2026-04-16 05:25:13.340355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 05:25:13.340547 | orchestrator | 2026-04-16 05:25:13.340566 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-04-16 05:25:13.340582 | orchestrator | Thursday 16 April 2026 05:25:12 +0000 (0:00:05.881) 0:04:42.249 ******** 2026-04-16 05:25:13.340598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 05:25:13.340627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 05:25:19.117386 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:19.117571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 05:25:19.117593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 05:25:19.117607 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:19.117619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 05:25:19.117631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 05:25:19.117661 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:19.117673 | orchestrator | 2026-04-16 05:25:19.117686 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-04-16 05:25:19.117698 | orchestrator | Thursday 16 April 2026 05:25:13 +0000 (0:00:00.617) 0:04:42.867 ******** 2026-04-16 05:25:19.117729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117786 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:19.117797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 
05:25:19.117841 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:19.117852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-04-16 05:25:19.117896 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:19.117907 | orchestrator | 2026-04-16 05:25:19.117928 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-04-16 05:25:19.117947 | orchestrator | Thursday 16 April 2026 05:25:14 +0000 (0:00:00.910) 0:04:43.777 ******** 2026-04-16 05:25:19.117965 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:25:19.117984 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:25:19.118004 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:25:19.118089 | orchestrator | 2026-04-16 05:25:19.118104 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-04-16 05:25:19.118118 | orchestrator | Thursday 16 April 2026 05:25:16 +0000 (0:00:01.876) 0:04:45.654 ******** 2026-04-16 05:25:19.118130 | orchestrator | 
changed: [testbed-node-0] 2026-04-16 05:25:19.118143 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:25:19.118156 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:25:19.118168 | orchestrator | 2026-04-16 05:25:19.118182 | orchestrator | TASK [include_role : swift] **************************************************** 2026-04-16 05:25:19.118195 | orchestrator | Thursday 16 April 2026 05:25:17 +0000 (0:00:01.782) 0:04:47.436 ******** 2026-04-16 05:25:19.118207 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:19.118220 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:19.118232 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:19.118245 | orchestrator | 2026-04-16 05:25:19.118258 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-04-16 05:25:19.118271 | orchestrator | Thursday 16 April 2026 05:25:18 +0000 (0:00:00.620) 0:04:48.057 ******** 2026-04-16 05:25:19.118283 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:19.118296 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:25:19.118308 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:25:19.118321 | orchestrator | 2026-04-16 05:25:19.118333 | orchestrator | TASK [include_role : trove] **************************************************** 2026-04-16 05:25:19.118344 | orchestrator | Thursday 16 April 2026 05:25:18 +0000 (0:00:00.292) 0:04:48.349 ******** 2026-04-16 05:25:19.118355 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:25:19.118374 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:26:00.351695 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:26:00.351847 | orchestrator | 2026-04-16 05:26:00.351877 | orchestrator | TASK [include_role : venus] **************************************************** 2026-04-16 05:26:00.351899 | orchestrator | Thursday 16 April 2026 05:25:19 +0000 (0:00:00.297) 0:04:48.647 ******** 2026-04-16 05:26:00.351920 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 05:26:00.351939 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:26:00.351957 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:26:00.351976 | orchestrator | 2026-04-16 05:26:00.351994 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-04-16 05:26:00.352013 | orchestrator | Thursday 16 April 2026 05:25:19 +0000 (0:00:00.294) 0:04:48.941 ******** 2026-04-16 05:26:00.352032 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:26:00.352051 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:26:00.352068 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:26:00.352087 | orchestrator | 2026-04-16 05:26:00.352105 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-04-16 05:26:00.352124 | orchestrator | Thursday 16 April 2026 05:25:19 +0000 (0:00:00.569) 0:04:49.510 ******** 2026-04-16 05:26:00.352164 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:26:00.352183 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:26:00.352200 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:26:00.352216 | orchestrator | 2026-04-16 05:26:00.352234 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-16 05:26:00.352254 | orchestrator | Thursday 16 April 2026 05:25:20 +0000 (0:00:00.558) 0:04:50.069 ******** 2026-04-16 05:26:00.352272 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:26:00.352292 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:00.352310 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:00.352329 | orchestrator | 2026-04-16 05:26:00.352347 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-16 05:26:00.352437 | orchestrator | Thursday 16 April 2026 05:25:21 +0000 (0:00:00.650) 0:04:50.720 ******** 2026-04-16 05:26:00.352461 | orchestrator | ok: [testbed-node-0] 
2026-04-16 05:26:00.352478 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:00.352496 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:00.352511 | orchestrator | 2026-04-16 05:26:00.352528 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-16 05:26:00.352545 | orchestrator | Thursday 16 April 2026 05:25:21 +0000 (0:00:00.610) 0:04:51.331 ******** 2026-04-16 05:26:00.352563 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:26:00.352581 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:00.352598 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:00.352615 | orchestrator | 2026-04-16 05:26:00.352633 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-16 05:26:00.352651 | orchestrator | Thursday 16 April 2026 05:25:22 +0000 (0:00:00.868) 0:04:52.199 ******** 2026-04-16 05:26:00.352670 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:26:00.352687 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:00.352705 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:00.352722 | orchestrator | 2026-04-16 05:26:00.352740 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-16 05:26:00.352755 | orchestrator | Thursday 16 April 2026 05:25:23 +0000 (0:00:00.867) 0:04:53.066 ******** 2026-04-16 05:26:00.352772 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:26:00.352788 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:00.352807 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:00.352825 | orchestrator | 2026-04-16 05:26:00.352841 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-16 05:26:00.352856 | orchestrator | Thursday 16 April 2026 05:25:24 +0000 (0:00:00.840) 0:04:53.906 ******** 2026-04-16 05:26:00.352872 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:26:00.352888 | orchestrator | changed: [testbed-node-1] 
2026-04-16 05:26:00.352904 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:26:00.352920 | orchestrator | 
2026-04-16 05:26:00.352936 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-04-16 05:26:00.352952 | orchestrator | Thursday 16 April 2026 05:25:29 +0000 (0:00:04.768) 0:04:58.675 ********
2026-04-16 05:26:00.352968 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:26:00.352983 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:26:00.352998 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:26:00.353013 | orchestrator | 
2026-04-16 05:26:00.353027 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-04-16 05:26:00.353042 | orchestrator | Thursday 16 April 2026 05:25:31 +0000 (0:00:02.727) 0:05:01.403 ********
2026-04-16 05:26:00.353059 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:26:00.353075 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:26:00.353090 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:26:00.353106 | orchestrator | 
2026-04-16 05:26:00.353122 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-04-16 05:26:00.353139 | orchestrator | Thursday 16 April 2026 05:25:42 +0000 (0:00:10.502) 0:05:11.905 ********
2026-04-16 05:26:00.353156 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:26:00.353172 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:26:00.353188 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:26:00.353204 | orchestrator | 
2026-04-16 05:26:00.353221 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-04-16 05:26:00.353238 | orchestrator | Thursday 16 April 2026 05:25:47 +0000 (0:00:04.738) 0:05:16.643 ********
2026-04-16 05:26:00.353253 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:26:00.353271 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:26:00.353281 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:26:00.353291 | orchestrator | 
2026-04-16 05:26:00.353301 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-04-16 05:26:00.353311 | orchestrator | Thursday 16 April 2026 05:25:51 +0000 (0:00:04.271) 0:05:20.915 ********
2026-04-16 05:26:00.353339 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353352 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353368 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353384 | orchestrator | 
2026-04-16 05:26:00.353428 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-04-16 05:26:00.353444 | orchestrator | Thursday 16 April 2026 05:25:52 +0000 (0:00:00.657) 0:05:21.572 ********
2026-04-16 05:26:00.353460 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353476 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353491 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353506 | orchestrator | 
2026-04-16 05:26:00.353550 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-04-16 05:26:00.353568 | orchestrator | Thursday 16 April 2026 05:25:52 +0000 (0:00:00.347) 0:05:21.919 ********
2026-04-16 05:26:00.353585 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353600 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353618 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353628 | orchestrator | 
2026-04-16 05:26:00.353638 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-04-16 05:26:00.353648 | orchestrator | Thursday 16 April 2026 05:25:52 +0000 (0:00:00.363) 0:05:22.283 ********
2026-04-16 05:26:00.353658 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353668 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353677 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353687 | orchestrator | 
2026-04-16 05:26:00.353697 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-04-16 05:26:00.353706 | orchestrator | Thursday 16 April 2026 05:25:53 +0000 (0:00:00.339) 0:05:22.623 ********
2026-04-16 05:26:00.353716 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353726 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353745 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353755 | orchestrator | 
2026-04-16 05:26:00.353765 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-04-16 05:26:00.353774 | orchestrator | Thursday 16 April 2026 05:25:53 +0000 (0:00:00.623) 0:05:23.247 ********
2026-04-16 05:26:00.353784 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:00.353793 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:00.353803 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:00.353812 | orchestrator | 
2026-04-16 05:26:00.353822 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-04-16 05:26:00.353831 | orchestrator | Thursday 16 April 2026 05:25:54 +0000 (0:00:00.318) 0:05:23.565 ********
2026-04-16 05:26:00.353841 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:26:00.353850 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:26:00.353860 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:26:00.353869 | orchestrator | 
2026-04-16 05:26:00.353879 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-04-16 05:26:00.353889 | orchestrator | Thursday 16 April 2026 05:25:58 +0000 (0:00:04.768) 0:05:28.333 ********
2026-04-16 05:26:00.353898 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:26:00.353908 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:26:00.353917 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:26:00.353927 | orchestrator | 
2026-04-16 05:26:00.353936 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:26:00.353947 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-16 05:26:00.353959 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-16 05:26:00.353969 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-04-16 05:26:00.353978 | orchestrator | 
2026-04-16 05:26:00.353988 | orchestrator | 
2026-04-16 05:26:00.354007 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:26:00.354068 | orchestrator | Thursday 16 April 2026 05:25:59 +0000 (0:00:00.787) 0:05:29.121 ********
2026-04-16 05:26:00.354081 | orchestrator | ===============================================================================
2026-04-16 05:26:00.354091 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.50s
2026-04-16 05:26:00.354101 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.88s
2026-04-16 05:26:00.354111 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.47s
2026-04-16 05:26:00.354121 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.77s
2026-04-16 05:26:00.354130 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.77s
2026-04-16 05:26:00.354140 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.74s
2026-04-16 05:26:00.354150 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.27s
2026-04-16 05:26:00.354159 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.17s
2026-04-16 05:26:00.354169 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.06s
2026-04-16 05:26:00.354178 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.03s
2026-04-16 05:26:00.354188 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.02s
2026-04-16 05:26:00.354198 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.75s
2026-04-16 05:26:00.354207 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.41s
2026-04-16 05:26:00.354217 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.36s
2026-04-16 05:26:00.354227 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.34s
2026-04-16 05:26:00.354239 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.24s
2026-04-16 05:26:00.354256 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.19s
2026-04-16 05:26:00.354271 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.16s
2026-04-16 05:26:00.354287 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.16s
2026-04-16 05:26:00.354301 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.13s
2026-04-16 05:26:02.552785 | orchestrator | 2026-04-16 05:26:02 | INFO  | Task 505979e7-78fb-4eb6-8a17-7c980071700c (opensearch) was prepared for execution.
2026-04-16 05:26:02.552888 | orchestrator | 2026-04-16 05:26:02 | INFO  | It takes a moment until task 505979e7-78fb-4eb6-8a17-7c980071700c (opensearch) has been started and output is visible here.
2026-04-16 05:26:11.783985 | orchestrator | 2026-04-16 05:26:11.784100 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 05:26:11.784117 | orchestrator | 2026-04-16 05:26:11.784129 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 05:26:11.784141 | orchestrator | Thursday 16 April 2026 05:26:06 +0000 (0:00:00.183) 0:00:00.183 ******** 2026-04-16 05:26:11.784153 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:26:11.784165 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:26:11.784176 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:26:11.784187 | orchestrator | 2026-04-16 05:26:11.784198 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 05:26:11.784209 | orchestrator | Thursday 16 April 2026 05:26:06 +0000 (0:00:00.266) 0:00:00.450 ******** 2026-04-16 05:26:11.784237 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-16 05:26:11.784249 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-16 05:26:11.784260 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-16 05:26:11.784270 | orchestrator | 2026-04-16 05:26:11.784281 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-16 05:26:11.784314 | orchestrator | 2026-04-16 05:26:11.784325 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-16 05:26:11.784336 | orchestrator | Thursday 16 April 2026 05:26:06 +0000 (0:00:00.371) 0:00:00.821 ******** 2026-04-16 05:26:11.784347 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:26:11.784358 | orchestrator | 2026-04-16 05:26:11.784369 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-16 05:26:11.784380 | orchestrator | Thursday 16 April 2026 05:26:07 +0000 (0:00:00.431) 0:00:01.253 ******** 2026-04-16 05:26:11.784391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:26:11.784459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:26:11.784471 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 05:26:11.784482 | orchestrator | 2026-04-16 05:26:11.784493 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-16 05:26:11.784503 | orchestrator | Thursday 16 April 2026 05:26:07 +0000 (0:00:00.600) 0:00:01.853 ******** 2026-04-16 05:26:11.784519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:11.784538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:11.784572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:11.784598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:11.784623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:11.784638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:11.784652 | orchestrator | 2026-04-16 05:26:11.784665 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-16 05:26:11.784678 | orchestrator | Thursday 16 April 2026 05:26:09 +0000 (0:00:01.375) 0:00:03.228 ******** 2026-04-16 05:26:11.784691 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:26:11.784703 | orchestrator | 2026-04-16 05:26:11.784716 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-04-16 05:26:11.784729 | orchestrator | Thursday 16 April 2026 05:26:09 +0000 (0:00:00.453) 0:00:03.682 ******** 2026-04-16 05:26:11.784751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:12.431384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:12.431557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-04-16 05:26:12.431584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:12.431601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:12.431671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-04-16 05:26:12.431691 | orchestrator | 2026-04-16 05:26:12.431707 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-04-16 05:26:12.431723 | orchestrator | Thursday 16 April 2026 05:26:11 +0000 (0:00:02.180) 0:00:05.862 ******** 2026-04-16 05:26:12.431738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:26:12.431755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:26:12.431769 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:26:12.431783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:26:12.431816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:26:13.269663 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:26:13.269755 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:26:13.269771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-04-16 05:26:13.269781 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 05:26:13.269789 | orchestrator | 2026-04-16 05:26:13.269798 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-04-16 05:26:13.269808 | orchestrator | Thursday 16 April 2026 05:26:12 +0000 (0:00:00.646) 0:00:06.509 ******** 2026-04-16 05:26:13.269835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-04-16 05:26:13.269859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:13.269883 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:26:13.269893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:13.269902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:13.269910 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:26:13.269924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:13.269938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:13.269947 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:26:13.269955 | orchestrator |
2026-04-16 05:26:13.269963 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-16 05:26:13.269976 | orchestrator | Thursday 16 April 2026 05:26:13 +0000 (0:00:00.833) 0:00:07.342 ********
2026-04-16 05:26:20.929605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:20.929736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:20.929755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:20.929807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:20.929844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:20.929858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:26:20.929878 | orchestrator |
2026-04-16 05:26:20.929891 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-16 05:26:20.929912 | orchestrator | Thursday 16 April 2026 05:26:15 +0000 (0:00:02.211) 0:00:09.554 ********
2026-04-16 05:26:20.929931 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:26:20.930079 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:26:20.930104 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:26:20.930121 | orchestrator |
2026-04-16 05:26:20.930140 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-16 05:26:20.930157 | orchestrator | Thursday 16 April 2026 05:26:17 +0000 (0:00:02.170) 0:00:11.724 ********
2026-04-16 05:26:20.930176 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:26:20.930196 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:26:20.930214 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:26:20.930233 | orchestrator |
2026-04-16 05:26:20.930253 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2026-04-16 05:26:20.930274 | orchestrator | Thursday 16 April 2026 05:26:19 +0000 (0:00:01.713) 0:00:13.437 ********
2026-04-16 05:26:20.930294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:20.930324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:26:20.930355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-04-16 05:28:55.876279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:28:55.876489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:28:55.876531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601',
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-04-16 05:28:55.876547 | orchestrator |
2026-04-16 05:28:55.876563 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-16 05:28:55.876578 | orchestrator | Thursday 16 April 2026 05:26:20 +0000 (0:00:01.568) 0:00:15.006 ********
2026-04-16 05:28:55.876590 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:28:55.876604 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:28:55.876617 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:28:55.876630 | orchestrator |
2026-04-16 05:28:55.876645 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-16 05:28:55.876660 | orchestrator | Thursday 16 April 2026 05:26:21 +0000 (0:00:00.257) 0:00:15.264 ********
2026-04-16 05:28:55.876674 | orchestrator |
2026-04-16 05:28:55.876688 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-16 05:28:55.876702 | orchestrator | Thursday 16 April 2026 05:26:21 +0000 (0:00:00.060) 0:00:15.324 ********
2026-04-16 05:28:55.876716 | orchestrator |
2026-04-16 05:28:55.876730 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2026-04-16 05:28:55.876756 | orchestrator | Thursday 16 April 2026 05:26:21 +0000 (0:00:00.061) 0:00:15.386 ********
2026-04-16 05:28:55.876770 | orchestrator |
2026-04-16 05:28:55.876785 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2026-04-16 05:28:55.876824 | orchestrator | Thursday 16 April 2026 05:26:21 +0000 (0:00:00.060) 0:00:15.446 ********
2026-04-16 05:28:55.876839 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:28:55.876853 | orchestrator |
2026-04-16 05:28:55.876867 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2026-04-16 05:28:55.876880 | orchestrator | Thursday 16 April 2026 05:26:21 +0000 (0:00:00.194) 0:00:15.641 ********
2026-04-16 05:28:55.876894 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:28:55.876907 | orchestrator |
2026-04-16 05:28:55.876921 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2026-04-16 05:28:55.876934 | orchestrator | Thursday 16 April 2026 05:26:22 +0000 (0:00:00.557) 0:00:16.198 ********
2026-04-16 05:28:55.876949 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:28:55.876964 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:28:55.876979 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:28:55.877013 | orchestrator |
2026-04-16 05:28:55.877028 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2026-04-16 05:28:55.877041 | orchestrator | Thursday 16 April 2026 05:27:29 +0000 (0:01:07.077) 0:01:23.276 ********
2026-04-16 05:28:55.877054 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:28:55.877067 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:28:55.877080 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:28:55.877093 | orchestrator |
2026-04-16 05:28:55.877131 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-16 05:28:55.877161 | orchestrator | Thursday 16 April 2026 05:28:44 +0000 (0:01:15.796) 0:02:39.073 ********
2026-04-16 05:28:55.877176 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:28:55.877190 | orchestrator |
2026-04-16 05:28:55.877203 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2026-04-16 05:28:55.877217 | orchestrator | Thursday 16 April 2026 05:28:45 +0000 (0:00:00.499) 0:02:39.573 ********
2026-04-16 05:28:55.877230 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:28:55.877243 | orchestrator |
2026-04-16 05:28:55.877257 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2026-04-16 05:28:55.877271 | orchestrator | Thursday 16 April 2026 05:28:48 +0000 (0:00:02.707) 0:02:42.280 ********
2026-04-16 05:28:55.877284 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:28:55.877296 | orchestrator |
2026-04-16 05:28:55.877305 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2026-04-16 05:28:55.877312 | orchestrator | Thursday 16 April 2026 05:28:50 +0000 (0:00:02.369) 0:02:44.650 ********
2026-04-16 05:28:55.877320 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:28:55.877328 | orchestrator |
2026-04-16 05:28:55.877336 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2026-04-16 05:28:55.877344 | orchestrator | Thursday 16 April 2026 05:28:53 +0000 (0:00:02.714) 0:02:47.364 ********
2026-04-16 05:28:55.877352 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:28:55.877359 | orchestrator |
2026-04-16 05:28:55.877367 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:28:55.877376 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 05:28:55.877386 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 05:28:55.877405 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 05:28:55.877413 | orchestrator |
2026-04-16 05:28:55.877446 | orchestrator |
2026-04-16 05:28:55.877465 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:28:55.877474 | orchestrator | Thursday 16 April 2026 05:28:55 +0000 (0:00:02.567) 0:02:49.932 ********
2026-04-16 05:28:55.877482 | orchestrator | ===============================================================================
2026-04-16 05:28:55.877489 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.80s
2026-04-16 05:28:55.877498 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.08s
2026-04-16 05:28:55.877506 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.71s
2026-04-16 05:28:55.877514 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.71s
2026-04-16 05:28:55.877522 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s
2026-04-16 05:28:55.877529 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.37s
2026-04-16 05:28:55.877537 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.21s
2026-04-16 05:28:55.877545 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.18s
2026-04-16 05:28:55.877553 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.17s
2026-04-16 05:28:55.877560 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.71s
2026-04-16 05:28:55.877568 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.57s
2026-04-16 05:28:55.877576 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.38s
2026-04-16 05:28:55.877584 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.83s
2026-04-16 05:28:55.877592 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.65s
2026-04-16 05:28:55.877600 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.60s
2026-04-16 05:28:55.877608 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.56s
2026-04-16 05:28:55.877628 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s
2026-04-16 05:28:56.232714 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s
2026-04-16 05:28:56.232803 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.43s
2026-04-16 05:28:56.232814 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.37s
2026-04-16 05:28:58.423038 | orchestrator | 2026-04-16 05:28:58 | INFO  | Task dbd09a69-a065-4fce-9cbb-d1e20ada63a3 (memcached) was prepared for execution.
2026-04-16 05:28:58.423138 | orchestrator | 2026-04-16 05:28:58 | INFO  | It takes a moment until task dbd09a69-a065-4fce-9cbb-d1e20ada63a3 (memcached) has been started and output is visible here.
2026-04-16 05:29:14.641690 | orchestrator |
2026-04-16 05:29:14.641804 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 05:29:14.641821 | orchestrator |
2026-04-16 05:29:14.641833 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 05:29:14.641845 | orchestrator | Thursday 16 April 2026 05:29:02 +0000 (0:00:00.240) 0:00:00.240 ********
2026-04-16 05:29:14.641857 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:29:14.641869 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:29:14.641880 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:29:14.641891 | orchestrator |
2026-04-16 05:29:14.641902 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 05:29:14.641913 | orchestrator | Thursday 16 April 2026 05:29:02 +0000 (0:00:00.273) 0:00:00.514 ********
2026-04-16 05:29:14.641924 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-04-16 05:29:14.641936 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-04-16 05:29:14.641947 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-04-16 05:29:14.641957 | orchestrator |
2026-04-16 05:29:14.641968 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-04-16 05:29:14.642007 | orchestrator |
2026-04-16 05:29:14.642074 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-04-16 05:29:14.642086 | orchestrator | Thursday 16 April 2026 05:29:03 +0000 (0:00:00.401) 0:00:00.915 ********
2026-04-16 05:29:14.642098 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:29:14.642109 | orchestrator |
2026-04-16 05:29:14.642120 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-04-16 05:29:14.642141 | orchestrator | Thursday 16 April 2026 05:29:03 +0000 (0:00:00.451) 0:00:01.367 ********
2026-04-16 05:29:14.642154 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-16 05:29:14.642174 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-16 05:29:14.642186 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-16 05:29:14.642197 | orchestrator |
2026-04-16 05:29:14.642208 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-04-16 05:29:14.642222 | orchestrator | Thursday 16 April 2026 05:29:04 +0000 (0:00:00.634) 0:00:02.002 ********
2026-04-16 05:29:14.642234 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-04-16 05:29:14.642247 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-04-16 05:29:14.642259 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-04-16 05:29:14.642271 | orchestrator |
2026-04-16 05:29:14.642284 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-04-16 05:29:14.642297 | orchestrator | Thursday 16 April 2026 05:29:05 +0000 (0:00:01.603) 0:00:03.606 ********
2026-04-16 05:29:14.642325 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:29:14.642338 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:29:14.642350 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:29:14.642362 | orchestrator |
2026-04-16 05:29:14.642375 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-04-16 05:29:14.642387 | orchestrator | Thursday 16 April 2026 05:29:07 +0000 (0:00:01.398) 0:00:05.004 ********
2026-04-16 05:29:14.642398 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:29:14.642409 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:29:14.642443 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:29:14.642455 | orchestrator |
2026-04-16 05:29:14.642466 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:29:14.642477 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:29:14.642490 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:29:14.642501 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:29:14.642511 | orchestrator |
2026-04-16 05:29:14.642522 | orchestrator |
2026-04-16 05:29:14.642533 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:29:14.642544 | orchestrator | Thursday 16 April 2026 05:29:14 +0000 (0:00:07.099) 0:00:12.104 ********
2026-04-16 05:29:14.642555 | orchestrator | ===============================================================================
2026-04-16 05:29:14.642566 | orchestrator | memcached : Restart memcached container --------------------------------- 7.10s
2026-04-16 05:29:14.642576 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.60s
2026-04-16 05:29:14.642587 | orchestrator | memcached : Check memcached container ----------------------------------- 1.40s
2026-04-16 05:29:14.642598 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.63s
2026-04-16 05:29:14.642609 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.45s
2026-04-16 05:29:14.642620 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2026-04-16 05:29:14.642631 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-04-16 05:29:16.853515 | orchestrator | 2026-04-16 05:29:16 | INFO  | Task eec60b2b-d254-4fff-8965-befb73485e9b (redis) was prepared for execution.
2026-04-16 05:29:16.853618 | orchestrator | 2026-04-16 05:29:16 | INFO  | It takes a moment until task eec60b2b-d254-4fff-8965-befb73485e9b (redis) has been started and output is visible here.
2026-04-16 05:29:24.741583 | orchestrator |
2026-04-16 05:29:24.741682 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 05:29:24.741695 | orchestrator |
2026-04-16 05:29:24.741704 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 05:29:24.741713 | orchestrator | Thursday 16 April 2026 05:29:20 +0000 (0:00:00.189) 0:00:00.189 ********
2026-04-16 05:29:24.741721 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:29:24.741730 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:29:24.741738 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:29:24.741746 | orchestrator |
2026-04-16 05:29:24.741754 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 05:29:24.741762 | orchestrator | Thursday 16 April 2026 05:29:20 +0000 (0:00:00.206) 0:00:00.396 ********
2026-04-16 05:29:24.741770 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-16 05:29:24.741778 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-16 05:29:24.741786 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-16 05:29:24.741793 | orchestrator |
2026-04-16 05:29:24.741801 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-16 05:29:24.741809 | orchestrator |
2026-04-16 05:29:24.741817 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-16 05:29:24.741825 | orchestrator | Thursday 16 April 2026 05:29:21 +0000 (0:00:00.274) 0:00:00.670 ********
2026-04-16 05:29:24.741833 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:29:24.741842 | orchestrator |
2026-04-16 05:29:24.741849 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-16
05:29:24.741857 | orchestrator | Thursday 16 April 2026 05:29:21 +0000 (0:00:00.350) 0:00:01.021 ********
2026-04-16 05:29:24.741869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 05:29:24.741966 | orchestrator |
2026-04-16 05:29:24.741974 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-16 05:29:24.741982 | orchestrator | Thursday 16 April 2026 05:29:22 +0000 (0:00:00.976) 0:00:01.997 ********
2026-04-16 05:29:24.741990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 05:29:24.742154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 05:29:24.742175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:24.742193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:24.742212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667531 | orchestrator | 2026-04-16 05:29:28.667569 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-04-16 05:29:28.667589 | orchestrator | Thursday 16 April 2026 05:29:24 +0000 (0:00:02.174) 0:00:04.171 ******** 2026-04-16 05:29:28.667610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 
05:29:28.667675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667785 | orchestrator | 2026-04-16 05:29:28.667799 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-04-16 05:29:28.667811 | orchestrator | Thursday 16 April 2026 05:29:27 +0000 (0:00:02.312) 0:00:06.484 ******** 2026-04-16 05:29:28.667824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:28.667944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-04-16 05:29:39.522508 | orchestrator | 2026-04-16 05:29:39.522644 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-16 05:29:39.522670 | orchestrator | Thursday 16 April 2026 05:29:28 +0000 (0:00:01.406) 0:00:07.890 ******** 2026-04-16 05:29:39.522689 | orchestrator | 2026-04-16 05:29:39.522706 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-16 05:29:39.522725 | orchestrator | Thursday 16 April 2026 05:29:28 +0000 (0:00:00.063) 0:00:07.953 ******** 2026-04-16 05:29:39.522742 | orchestrator | 2026-04-16 05:29:39.522760 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-04-16 05:29:39.522777 | orchestrator | Thursday 16 April 2026 
05:29:28 +0000 (0:00:00.061) 0:00:08.015 ******** 2026-04-16 05:29:39.522795 | orchestrator | 2026-04-16 05:29:39.522813 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-04-16 05:29:39.522831 | orchestrator | Thursday 16 April 2026 05:29:28 +0000 (0:00:00.075) 0:00:08.090 ******** 2026-04-16 05:29:39.522849 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:29:39.522868 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:29:39.522885 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:29:39.522903 | orchestrator | 2026-04-16 05:29:39.522921 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-04-16 05:29:39.522939 | orchestrator | Thursday 16 April 2026 05:29:36 +0000 (0:00:07.510) 0:00:15.601 ******** 2026-04-16 05:29:39.522992 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:29:39.523011 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:29:39.523026 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:29:39.523041 | orchestrator | 2026-04-16 05:29:39.523057 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:29:39.523074 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:29:39.523091 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:29:39.523122 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:29:39.523137 | orchestrator | 2026-04-16 05:29:39.523152 | orchestrator | 2026-04-16 05:29:39.523168 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:29:39.523183 | orchestrator | Thursday 16 April 2026 05:29:39 +0000 (0:00:03.063) 0:00:18.665 ******** 2026-04-16 05:29:39.523198 | orchestrator | 
=============================================================================== 2026-04-16 05:29:39.523213 | orchestrator | redis : Restart redis container ----------------------------------------- 7.51s 2026-04-16 05:29:39.523228 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.06s 2026-04-16 05:29:39.523243 | orchestrator | redis : Copying over redis config files --------------------------------- 2.31s 2026-04-16 05:29:39.523259 | orchestrator | redis : Copying over default config.json files -------------------------- 2.17s 2026-04-16 05:29:39.523274 | orchestrator | redis : Check redis containers ------------------------------------------ 1.41s 2026-04-16 05:29:39.523289 | orchestrator | redis : Ensuring config directories exist ------------------------------- 0.98s 2026-04-16 05:29:39.523304 | orchestrator | redis : include_tasks --------------------------------------------------- 0.35s 2026-04-16 05:29:39.523319 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.27s 2026-04-16 05:29:39.523333 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.21s 2026-04-16 05:29:39.523348 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s 2026-04-16 05:29:41.679937 | orchestrator | 2026-04-16 05:29:41 | INFO  | Task ddb7e8d6-bf72-4d9e-93fe-80ddb76202de (mariadb) was prepared for execution. 2026-04-16 05:29:41.680040 | orchestrator | 2026-04-16 05:29:41 | INFO  | It takes a moment until task ddb7e8d6-bf72-4d9e-93fe-80ddb76202de (mariadb) has been started and output is visible here. 
2026-04-16 05:29:53.700160 | orchestrator | 2026-04-16 05:29:53.700269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 05:29:53.700286 | orchestrator | 2026-04-16 05:29:53.700298 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 05:29:53.700314 | orchestrator | Thursday 16 April 2026 05:29:45 +0000 (0:00:00.151) 0:00:00.151 ******** 2026-04-16 05:29:53.700334 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:29:53.700355 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:29:53.700374 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:29:53.700394 | orchestrator | 2026-04-16 05:29:53.700414 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 05:29:53.700484 | orchestrator | Thursday 16 April 2026 05:29:45 +0000 (0:00:00.220) 0:00:00.371 ******** 2026-04-16 05:29:53.700499 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-16 05:29:53.700511 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-16 05:29:53.700522 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-16 05:29:53.700533 | orchestrator | 2026-04-16 05:29:53.700558 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-16 05:29:53.700569 | orchestrator | 2026-04-16 05:29:53.700580 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-16 05:29:53.700619 | orchestrator | Thursday 16 April 2026 05:29:46 +0000 (0:00:00.430) 0:00:00.801 ******** 2026-04-16 05:29:53.700630 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 05:29:53.700642 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-16 05:29:53.700652 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-16 05:29:53.700663 | orchestrator | 
2026-04-16 05:29:53.700674 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-16 05:29:53.700687 | orchestrator | Thursday 16 April 2026 05:29:46 +0000 (0:00:00.304) 0:00:01.106 ******** 2026-04-16 05:29:53.700701 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:29:53.700714 | orchestrator | 2026-04-16 05:29:53.700727 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-04-16 05:29:53.700740 | orchestrator | Thursday 16 April 2026 05:29:47 +0000 (0:00:00.422) 0:00:01.529 ******** 2026-04-16 05:29:53.700776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:29:53.700819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:29:53.700851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:29:53.700866 | orchestrator | 2026-04-16 05:29:53.700879 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-04-16 05:29:53.700891 | orchestrator | Thursday 16 April 2026 05:29:49 +0000 (0:00:02.144) 0:00:03.673 ******** 2026-04-16 05:29:53.700903 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:29:53.700917 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:29:53.700929 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:29:53.700941 | orchestrator | 2026-04-16 05:29:53.700954 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-04-16 05:29:53.700966 | orchestrator | Thursday 16 April 2026 05:29:49 +0000 (0:00:00.512) 0:00:04.185 ******** 2026-04-16 05:29:53.700979 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:29:53.700991 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:29:53.701003 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:29:53.701024 | orchestrator | 2026-04-16 05:29:53.701044 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-04-16 05:29:53.701064 | orchestrator | Thursday 16 April 2026 05:29:51 +0000 (0:00:01.344) 0:00:05.530 ******** 2026-04-16 05:29:53.701098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:30:00.345535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:30:00.345615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 05:30:00.345634 | orchestrator |
2026-04-16 05:30:00.345639 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-04-16 05:30:00.345645 | orchestrator | Thursday 16 April 2026 05:29:53 +0000 (0:00:02.635) 0:00:08.165 ********
2026-04-16 05:30:00.345649 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:30:00.345654 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:30:00.345658 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:30:00.345662 | orchestrator |
2026-04-16 05:30:00.345666 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-04-16 05:30:00.345682 | orchestrator | Thursday 16 April 2026 05:29:54 +0000 (0:00:01.015) 0:00:09.180 ********
2026-04-16 05:30:00.345686 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:30:00.345690 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:30:00.345694 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:30:00.345697 | orchestrator |
2026-04-16 05:30:00.345701 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 05:30:00.345705 | orchestrator | Thursday 16 April 2026 05:29:57 +0000 (0:00:03.119) 0:00:12.300 ********
2026-04-16 05:30:00.345710 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:30:00.345714 | orchestrator |
2026-04-16 05:30:00.345717 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-04-16 05:30:00.345721 | orchestrator | Thursday 16 April 2026 05:29:58 +0000 (0:00:00.455) 0:00:12.755 ********
2026-04-16 05:30:00.345729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:00.345737 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:30:00.345745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:04.699693 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:30:04.699815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 05:30:04.699854 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:30:04.699866 | orchestrator |
2026-04-16 05:30:04.699876 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-04-16 05:30:04.699887 | orchestrator | Thursday 16 April 2026 05:30:00 +0000 (0:00:02.057) 0:00:14.813 ********
2026-04-16 05:30:04.699899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'],
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:04.699910 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:30:04.699944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:04.699964 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:30:04.699987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 05:30:04.700008 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:30:04.700019 | orchestrator |
2026-04-16 05:30:04.700029 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-04-16 05:30:04.700039 | orchestrator | Thursday 16 April 2026 05:30:02 +0000 (0:00:02.254) 0:00:17.067 ********
2026-04-16 05:30:04.700064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode':
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:07.219405 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:30:07.219493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 05:30:07.219505 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:30:07.219520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 05:30:07.219526 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:30:07.219544 | orchestrator |
2026-04-16 05:30:07.219551 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-04-16 05:30:07.219557 | orchestrator | Thursday 16 April 2026 05:30:04 +0000 (0:00:02.104) 0:00:19.172 ********
2026-04-16 05:30:07.219574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD':
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:30:07.219582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 05:30:07.219596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 05:32:15.370336 | orchestrator |
2026-04-16 05:32:15.370453 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-04-16 05:32:15.370532 | orchestrator | Thursday 16 April 2026 05:30:07 +0000 (0:00:02.515) 0:00:21.687 ********
2026-04-16 05:32:15.370547 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:32:15.370559 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:32:15.370570 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:32:15.370581 | orchestrator |
2026-04-16 05:32:15.370592 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-04-16 05:32:15.370604 | orchestrator | Thursday 16 April 2026 05:30:08 +0000 (0:00:00.808) 0:00:22.495 ********
2026-04-16 05:32:15.370615 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.370627 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:32:15.370637 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:32:15.370648 | orchestrator |
2026-04-16 05:32:15.370659 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-16 05:32:15.370670 | orchestrator | Thursday 16 April 2026 05:30:08 +0000 (0:00:00.502) 0:00:22.998 ********
2026-04-16 05:32:15.370681 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.370691 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:32:15.370702 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:32:15.370712 | orchestrator |
2026-04-16 05:32:15.370723 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-16 05:32:15.370734 | orchestrator | Thursday 16 April 2026 05:30:08 +0000 (0:00:00.305) 0:00:23.303 ********
2026-04-16 05:32:15.370746 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-04-16 05:32:15.370759 | orchestrator | ...ignoring
2026-04-16 05:32:15.370770 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-04-16 05:32:15.370781 | orchestrator | ...ignoring
2026-04-16 05:32:15.370792 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-04-16 05:32:15.370803 | orchestrator | ...ignoring
2026-04-16 05:32:15.370814 | orchestrator |
2026-04-16 05:32:15.370848 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-16 05:32:15.370860 | orchestrator | Thursday 16 April 2026 05:30:19 +0000 (0:00:10.786) 0:00:34.090 ********
2026-04-16 05:32:15.370873 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.370885 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:32:15.370898 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:32:15.370911 | orchestrator |
2026-04-16 05:32:15.370922 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-16 05:32:15.370935 | orchestrator | Thursday 16 April 2026 05:30:20 +0000 (0:00:00.392) 0:00:34.482 ********
2026-04-16 05:32:15.370947 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.370960 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.370972 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.370984 | orchestrator |
2026-04-16 05:32:15.370997 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-16 05:32:15.371010 | orchestrator | Thursday 16 April 2026 05:30:20 +0000 (0:00:00.607) 0:00:35.090 ********
2026-04-16 05:32:15.371023 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371035 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371047 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371059 | orchestrator |
2026-04-16 05:32:15.371086 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-16 05:32:15.371101 | orchestrator | Thursday 16 April 2026 05:30:21 +0000 (0:00:00.409) 0:00:35.499 ********
2026-04-16 05:32:15.371113 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371125 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371137 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371149 | orchestrator |
2026-04-16 05:32:15.371161 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-16 05:32:15.371174 | orchestrator | Thursday 16 April 2026 05:30:21 +0000 (0:00:00.425) 0:00:35.925 ********
2026-04-16 05:32:15.371184 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.371195 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:32:15.371205 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:32:15.371216 | orchestrator |
2026-04-16 05:32:15.371227 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-16 05:32:15.371238 | orchestrator | Thursday 16 April 2026 05:30:21 +0000 (0:00:00.774) 0:00:36.327 ********
2026-04-16 05:32:15.371249 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371260 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371271 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371281 | orchestrator |
2026-04-16 05:32:15.371292 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 05:32:15.371302 | orchestrator | Thursday 16 April 2026 05:30:22 +0000 (0:00:00.351) 0:00:37.101 ********
2026-04-16 05:32:15.371313 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371324 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371335 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-04-16 05:32:15.371345 | orchestrator |
2026-04-16 05:32:15.371356 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-04-16 05:32:15.371367 | orchestrator | Thursday 16 April 2026 05:30:22 +0000 (0:00:00.351) 0:00:37.452 ********
2026-04-16 05:32:15.371377 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:32:15.371388 | orchestrator |
2026-04-16 05:32:15.371398 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-04-16 05:32:15.371409 | orchestrator | Thursday 16 April 2026 05:30:32 +0000 (0:00:09.854) 0:00:47.307 ********
2026-04-16 05:32:15.371420 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.371430 | orchestrator |
2026-04-16 05:32:15.371441 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 05:32:15.371452 | orchestrator | Thursday 16 April 2026 05:30:32 +0000 (0:00:00.128) 0:00:47.436 ********
2026-04-16 05:32:15.371463 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371533 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371546 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371557 | orchestrator |
2026-04-16 05:32:15.371567 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-04-16 05:32:15.371578 | orchestrator | Thursday 16 April 2026 05:30:33 +0000 (0:00:00.933) 0:00:48.370 ********
2026-04-16 05:32:15.371589 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:32:15.371599 | orchestrator |
2026-04-16 05:32:15.371610 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-04-16 05:32:15.371621 | orchestrator | Thursday 16 April 2026 05:30:40 +0000 (0:00:06.539) 0:00:54.910 ********
2026-04-16 05:32:15.371631 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.371642 | orchestrator |
2026-04-16 05:32:15.371653 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-04-16 05:32:15.371663 | orchestrator | Thursday 16 April 2026 05:30:42 +0000 (0:00:01.668) 0:00:56.578 ********
2026-04-16 05:32:15.371727 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:32:15.371739 | orchestrator |
2026-04-16 05:32:15.371750 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-04-16 05:32:15.371761 | orchestrator | Thursday 16 April 2026 05:30:44 +0000 (0:00:02.229) 0:00:58.808 ********
2026-04-16 05:32:15.371772 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:32:15.371783 | orchestrator |
2026-04-16 05:32:15.371793 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-16 05:32:15.371804 | orchestrator | Thursday 16 April 2026 05:30:44 +0000 (0:00:00.112) 0:00:58.921 ********
2026-04-16 05:32:15.371814 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371825 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:32:15.371836 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:32:15.371846 | orchestrator |
2026-04-16 05:32:15.371857 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-16 05:32:15.371868 | orchestrator | Thursday 16 April 2026 05:30:44 +0000 (0:00:00.293) 0:00:59.214 ********
2026-04-16 05:32:15.371879 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:32:15.371889 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-16 05:32:15.371900 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:32:15.371910 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:32:15.371921 | orchestrator |
2026-04-16 05:32:15.371931 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-16 05:32:15.371942 | orchestrator | skipping: no hosts matched
2026-04-16 05:32:15.371953 | orchestrator |
2026-04-16 05:32:15.371963 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-16 05:32:15.371974 | orchestrator |
2026-04-16 05:32:15.371985 | orchestrator | TASK [mariadb : Restart MariaDB container]
************************************* 2026-04-16 05:32:15.371995 | orchestrator | Thursday 16 April 2026 05:30:45 +0000 (0:00:00.436) 0:00:59.651 ******** 2026-04-16 05:32:15.372006 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:32:15.372016 | orchestrator | 2026-04-16 05:32:15.372027 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-16 05:32:15.372038 | orchestrator | Thursday 16 April 2026 05:31:02 +0000 (0:00:16.973) 0:01:16.625 ******** 2026-04-16 05:32:15.372048 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:32:15.372059 | orchestrator | 2026-04-16 05:32:15.372070 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-16 05:32:15.372080 | orchestrator | Thursday 16 April 2026 05:31:18 +0000 (0:00:16.542) 0:01:33.167 ******** 2026-04-16 05:32:15.372091 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:32:15.372102 | orchestrator | 2026-04-16 05:32:15.372116 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-16 05:32:15.372128 | orchestrator | 2026-04-16 05:32:15.372144 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-16 05:32:15.372155 | orchestrator | Thursday 16 April 2026 05:31:20 +0000 (0:00:02.208) 0:01:35.375 ******** 2026-04-16 05:32:15.372174 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:32:15.372185 | orchestrator | 2026-04-16 05:32:15.372197 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-16 05:32:15.372216 | orchestrator | Thursday 16 April 2026 05:31:38 +0000 (0:00:17.380) 0:01:52.756 ******** 2026-04-16 05:32:15.372235 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:32:15.372251 | orchestrator | 2026-04-16 05:32:15.372268 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-16 05:32:15.372284 
| orchestrator | Thursday 16 April 2026 05:31:54 +0000 (0:00:16.533) 0:02:09.289 ******** 2026-04-16 05:32:15.372301 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:32:15.372319 | orchestrator | 2026-04-16 05:32:15.372338 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-16 05:32:15.372358 | orchestrator | 2026-04-16 05:32:15.372375 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-04-16 05:32:15.372392 | orchestrator | Thursday 16 April 2026 05:31:57 +0000 (0:00:02.317) 0:02:11.607 ******** 2026-04-16 05:32:15.372403 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:32:15.372414 | orchestrator | 2026-04-16 05:32:15.372425 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-04-16 05:32:15.372435 | orchestrator | Thursday 16 April 2026 05:32:06 +0000 (0:00:09.690) 0:02:21.297 ******** 2026-04-16 05:32:15.372446 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:15.372456 | orchestrator | 2026-04-16 05:32:15.372494 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-04-16 05:32:15.372512 | orchestrator | Thursday 16 April 2026 05:32:12 +0000 (0:00:05.555) 0:02:26.853 ******** 2026-04-16 05:32:15.372522 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:15.372533 | orchestrator | 2026-04-16 05:32:15.372544 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-16 05:32:15.372555 | orchestrator | 2026-04-16 05:32:15.372566 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-16 05:32:15.372576 | orchestrator | Thursday 16 April 2026 05:32:14 +0000 (0:00:02.340) 0:02:29.193 ******** 2026-04-16 05:32:15.372587 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:32:15.372598 | orchestrator | 
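The restart sequence above (bootstrap host first, then each member, each followed by "Wait for MariaDB service to sync WSREP") polls Galera's sync state. A minimal sketch of that check, assuming the tab-separated output of `mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"` (the status variable is real Galera; the polling shape is an illustration, not kolla-ansible's exact implementation):

```python
def wsrep_synced(show_status_output: str) -> bool:
    """Return True once the Galera node reports 'Synced'.

    Expects the tab-separated rows printed by:
      mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
    Other states such as 'Donor/Desynced' or 'Joining' mean the node
    is not yet safe to take traffic or to restart the next member.
    """
    for line in show_status_output.splitlines():
        parts = line.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            return parts[1].strip() == "Synced"
    return False
```

A caller would run this in a retry loop per node, moving to the next cluster member only after the current one reports `Synced` — which is why each "Restart MariaDB container" task above is immediately followed by the two wait tasks.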
2026-04-16 05:32:15.372609 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-04-16 05:32:15.372630 | orchestrator | Thursday 16 April 2026 05:32:15 +0000 (0:00:00.638) 0:02:29.831 ******** 2026-04-16 05:32:27.687361 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:32:27.687546 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:32:27.687573 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:32:27.687586 | orchestrator | 2026-04-16 05:32:27.687599 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-04-16 05:32:27.687614 | orchestrator | Thursday 16 April 2026 05:32:17 +0000 (0:00:02.339) 0:02:32.171 ******** 2026-04-16 05:32:27.687626 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:32:27.687638 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:32:27.687651 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:32:27.687663 | orchestrator | 2026-04-16 05:32:27.687676 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-04-16 05:32:27.687688 | orchestrator | Thursday 16 April 2026 05:32:19 +0000 (0:00:02.165) 0:02:34.336 ******** 2026-04-16 05:32:27.687699 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:32:27.687712 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:32:27.687724 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:32:27.687736 | orchestrator | 2026-04-16 05:32:27.687748 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-04-16 05:32:27.687760 | orchestrator | Thursday 16 April 2026 05:32:22 +0000 (0:00:02.399) 0:02:36.735 ******** 2026-04-16 05:32:27.687772 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:32:27.687784 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:32:27.687796 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:32:27.687808 | orchestrator | 
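The port-liveness waits in this play (including the ignored "Timeout when waiting for search string MariaDB in 192.168.16.12:3306" at the start, which is expected while the cluster is still down) amount to opening a TCP connection and scanning the unsolicited MySQL handshake greeting for the string `MariaDB`, since MariaDB embeds its name in the server-version field of that packet. A hypothetical re-implementation of that probe:

```python
import socket


def greeting_mentions_mariadb(greeting: bytes) -> bool:
    """Check whether a MySQL-protocol server greeting contains 'MariaDB'."""
    return b"MariaDB" in greeting


def port_alive(host: str, port: int = 3306, timeout: float = 5.0) -> bool:
    """Connect and read the handshake bytes, looking for the search string.

    Returns False on refused/filtered connections or when the greeting
    does not mention MariaDB (e.g. a proxy answering on the port).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            return greeting_mentions_mariadb(sock.recv(1024))
    except OSError:
        return False
```

This is the same idea as Ansible's `wait_for` with a search string: connection success alone is not enough, because the service must also answer with the expected protocol banner.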
2026-04-16 05:32:27.687848 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-16 05:32:27.687861 | orchestrator | Thursday 16 April 2026 05:32:24 +0000 (0:00:02.090) 0:02:38.826 ******** 2026-04-16 05:32:27.687873 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:27.687887 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:32:27.687899 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:32:27.687912 | orchestrator | 2026-04-16 05:32:27.687924 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-16 05:32:27.687937 | orchestrator | Thursday 16 April 2026 05:32:27 +0000 (0:00:02.710) 0:02:41.536 ******** 2026-04-16 05:32:27.687950 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:27.687962 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:32:27.687975 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:32:27.687987 | orchestrator | 2026-04-16 05:32:27.687999 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:32:27.688013 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-04-16 05:32:27.688028 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-16 05:32:27.688040 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-04-16 05:32:27.688052 | orchestrator | 2026-04-16 05:32:27.688064 | orchestrator | 2026-04-16 05:32:27.688075 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:32:27.688088 | orchestrator | Thursday 16 April 2026 05:32:27 +0000 (0:00:00.356) 0:02:41.893 ******** 2026-04-16 05:32:27.688100 | orchestrator | =============================================================================== 2026-04-16 05:32:27.688129 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.35s 2026-04-16 05:32:27.688141 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 33.08s 2026-04-16 05:32:27.688153 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.79s 2026-04-16 05:32:27.688165 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.85s 2026-04-16 05:32:27.688177 | orchestrator | mariadb : Restart MariaDB container ------------------------------------- 9.69s 2026-04-16 05:32:27.688188 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.54s 2026-04-16 05:32:27.688202 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.56s 2026-04-16 05:32:27.688214 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.53s 2026-04-16 05:32:27.688226 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.12s 2026-04-16 05:32:27.688237 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.71s 2026-04-16 05:32:27.688249 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.64s 2026-04-16 05:32:27.688261 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.52s 2026-04-16 05:32:27.688273 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.40s 2026-04-16 05:32:27.688285 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.34s 2026-04-16 05:32:27.688298 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.34s 2026-04-16 05:32:27.688310 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.25s 2026-04-16 05:32:27.688323 | 
orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.23s 2026-04-16 05:32:27.688334 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.17s 2026-04-16 05:32:27.688346 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.14s 2026-04-16 05:32:27.688358 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.10s 2026-04-16 05:32:29.913401 | orchestrator | 2026-04-16 05:32:29 | INFO  | Task 2a16e9e2-b57a-4a0c-9864-ebad18fec5fa (rabbitmq) was prepared for execution. 2026-04-16 05:32:29.913557 | orchestrator | 2026-04-16 05:32:29 | INFO  | It takes a moment until task 2a16e9e2-b57a-4a0c-9864-ebad18fec5fa (rabbitmq) has been started and output is visible here. 2026-04-16 05:32:42.429910 | orchestrator | 2026-04-16 05:32:42.430080 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 05:32:42.430101 | orchestrator | 2026-04-16 05:32:42.430113 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 05:32:42.430125 | orchestrator | Thursday 16 April 2026 05:32:33 +0000 (0:00:00.166) 0:00:00.166 ******** 2026-04-16 05:32:42.430136 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:42.430148 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:32:42.430159 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:32:42.430170 | orchestrator | 2026-04-16 05:32:42.431080 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 05:32:42.431188 | orchestrator | Thursday 16 April 2026 05:32:34 +0000 (0:00:00.269) 0:00:00.436 ******** 2026-04-16 05:32:42.431207 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-04-16 05:32:42.431220 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-04-16 05:32:42.431231 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-04-16 05:32:42.431242 | orchestrator | 2026-04-16 05:32:42.431253 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-04-16 05:32:42.431265 | orchestrator | 2026-04-16 05:32:42.431276 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-16 05:32:42.431287 | orchestrator | Thursday 16 April 2026 05:32:34 +0000 (0:00:00.515) 0:00:00.952 ******** 2026-04-16 05:32:42.431299 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:32:42.431311 | orchestrator | 2026-04-16 05:32:42.431322 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-16 05:32:42.431332 | orchestrator | Thursday 16 April 2026 05:32:35 +0000 (0:00:00.478) 0:00:01.431 ******** 2026-04-16 05:32:42.431343 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:42.431354 | orchestrator | 2026-04-16 05:32:42.431365 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-04-16 05:32:42.431375 | orchestrator | Thursday 16 April 2026 05:32:36 +0000 (0:00:00.920) 0:00:02.351 ******** 2026-04-16 05:32:42.431387 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431398 | orchestrator | 2026-04-16 05:32:42.431409 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-04-16 05:32:42.431420 | orchestrator | Thursday 16 April 2026 05:32:36 +0000 (0:00:00.364) 0:00:02.716 ******** 2026-04-16 05:32:42.431431 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431441 | orchestrator | 2026-04-16 05:32:42.431452 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-04-16 05:32:42.431463 | orchestrator | Thursday 16 April 2026 05:32:36 +0000 (0:00:00.346) 0:00:03.062 ******** 
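The two guard tasks above ("Check if running RabbitMQ is at most one version behind" and "Catch when RabbitMQ is being downgraded") are skipped here because no container is running yet. A sketch of their assumed semantics — compare major.minor, forbid downgrades, allow at most one minor-series jump; the exact comparison kolla-ansible performs may differ:

```python
def check_rabbitmq_upgrade(current: str, new: str) -> None:
    """Guard an upgrade path between two RabbitMQ versions.

    Assumed policy (illustrative only): downgrades are rejected, and
    within the same major series the minor version may advance by at
    most one, since RabbitMQ supports rolling upgrades only between
    adjacent feature releases.
    """
    cur = tuple(int(p) for p in current.split(".")[:2])
    tgt = tuple(int(p) for p in new.split(".")[:2])
    if tgt < cur:
        raise ValueError(f"downgrade from {current} to {new} is not supported")
    if cur[0] == tgt[0] and tgt[1] - cur[1] > 1:
        raise ValueError(f"{current} is more than one version behind {new}")
```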
2026-04-16 05:32:42.431506 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431521 | orchestrator | 2026-04-16 05:32:42.431532 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-04-16 05:32:42.431543 | orchestrator | Thursday 16 April 2026 05:32:37 +0000 (0:00:00.352) 0:00:03.414 ******** 2026-04-16 05:32:42.431553 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431564 | orchestrator | 2026-04-16 05:32:42.431575 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-16 05:32:42.431586 | orchestrator | Thursday 16 April 2026 05:32:37 +0000 (0:00:00.529) 0:00:03.943 ******** 2026-04-16 05:32:42.431615 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:32:42.431626 | orchestrator | 2026-04-16 05:32:42.431662 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-04-16 05:32:42.431673 | orchestrator | Thursday 16 April 2026 05:32:38 +0000 (0:00:00.875) 0:00:04.819 ******** 2026-04-16 05:32:42.431684 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:32:42.431695 | orchestrator | 2026-04-16 05:32:42.431706 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-04-16 05:32:42.431716 | orchestrator | Thursday 16 April 2026 05:32:39 +0000 (0:00:00.801) 0:00:05.621 ******** 2026-04-16 05:32:42.431727 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431738 | orchestrator | 2026-04-16 05:32:42.431749 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-04-16 05:32:42.431761 | orchestrator | Thursday 16 April 2026 05:32:39 +0000 (0:00:00.355) 0:00:05.976 ******** 2026-04-16 05:32:42.431771 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:32:42.431782 | orchestrator | 2026-04-16 
05:32:42.431793 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-16 05:32:42.431803 | orchestrator | Thursday 16 April 2026 05:32:40 +0000 (0:00:00.374) 0:00:06.351 ******** 2026-04-16 05:32:42.431848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:32:42.431865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:32:42.431878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:32:42.431898 | orchestrator | 2026-04-16 05:32:42.431919 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-16 05:32:42.431939 | orchestrator | Thursday 16 April 2026 05:32:40 +0000 (0:00:00.762) 0:00:07.114 ******** 2026-04-16 05:32:42.431961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:32:42.431994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:33:00.225707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:33:00.225827 | orchestrator | 2026-04-16 05:33:00.225846 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-16 05:33:00.225867 | orchestrator | Thursday 16 April 2026 05:32:42 +0000 (0:00:01.591) 0:00:08.705 ******** 2026-04-16 05:33:00.225917 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 05:33:00.225938 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 05:33:00.225955 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 05:33:00.225973 | orchestrator | 2026-04-16 05:33:00.225991 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-04-16 05:33:00.226009 | orchestrator | Thursday 16 April 2026 05:32:43 +0000 (0:00:01.466) 0:00:10.172 ******** 2026-04-16 05:33:00.226104 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 05:33:00.226141 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 05:33:00.226159 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 05:33:00.226177 | orchestrator | 2026-04-16 05:33:00.226195 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-16 05:33:00.226212 | orchestrator | Thursday 16 April 2026 05:32:45 +0000 (0:00:01.577) 0:00:11.749 ******** 2026-04-16 05:33:00.226231 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 05:33:00.226250 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 05:33:00.226273 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 05:33:00.226294 | orchestrator | 2026-04-16 05:33:00.226314 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-16 05:33:00.226335 | orchestrator | Thursday 16 April 2026 05:32:46 +0000 (0:00:01.278) 0:00:13.028 ******** 2026-04-16 05:33:00.226357 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 05:33:00.226378 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 05:33:00.226399 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 05:33:00.226420 | orchestrator | 2026-04-16 05:33:00.226441 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-04-16 05:33:00.226461 | orchestrator | Thursday 16 April 2026 05:32:48 +0000 (0:00:01.620) 0:00:14.648 ******** 2026-04-16 05:33:00.226527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 05:33:00.226551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 05:33:00.226569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 05:33:00.226588 | orchestrator | 2026-04-16 05:33:00.226607 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-16 05:33:00.226628 | orchestrator | Thursday 16 April 2026 05:32:49 +0000 (0:00:01.294) 0:00:15.943 ******** 2026-04-16 05:33:00.226647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 05:33:00.226667 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 05:33:00.226686 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 05:33:00.226705 | orchestrator | 2026-04-16 05:33:00.226725 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-16 05:33:00.226745 | orchestrator | Thursday 16 April 2026 05:32:50 +0000 (0:00:01.290) 0:00:17.234 ******** 2026-04-16 05:33:00.226764 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:33:00.226785 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:33:00.226835 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:33:00.226878 | orchestrator | 2026-04-16 05:33:00.226898 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-04-16 05:33:00.226916 | orchestrator | 
Thursday 16 April 2026 05:32:51 +0000 (0:00:00.410) 0:00:17.644 ******** 2026-04-16 05:33:00.226938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:33:00.226971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:33:00.226992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 05:33:00.227011 | orchestrator | 2026-04-16 05:33:00.227029 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-04-16 05:33:00.227045 | orchestrator | Thursday 16 April 2026 05:32:52 +0000 (0:00:01.126) 0:00:18.770 ******** 2026-04-16 05:33:00.227062 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:33:00.227080 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:33:00.227097 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:33:00.227113 | orchestrator | 2026-04-16 05:33:00.227129 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-04-16 05:33:00.227157 | orchestrator | Thursday 16 April 2026 05:32:53 +0000 (0:00:00.781) 0:00:19.552 ********
2026-04-16 05:33:00.227174 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:33:00.227191 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:33:00.227207 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:33:00.227224 | orchestrator |
2026-04-16 05:33:00.227241 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-04-16 05:33:00.227271 | orchestrator | Thursday 16 April 2026 05:33:00 +0000 (0:00:06.946) 0:00:26.498 ********
2026-04-16 05:34:34.301080 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:34:34.301193 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:34:34.301208 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:34:34.301215 | orchestrator |
2026-04-16 05:34:34.301223 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-16 05:34:34.301231 | orchestrator |
2026-04-16 05:34:34.301238 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-16 05:34:34.301245 | orchestrator | Thursday 16 April 2026 05:33:00 +0000 (0:00:00.443) 0:00:26.942 ********
2026-04-16 05:34:34.301251 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:34:34.301258 | orchestrator |
2026-04-16 05:34:34.301264 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-16 05:34:34.301271 | orchestrator | Thursday 16 April 2026 05:33:01 +0000 (0:00:00.603) 0:00:27.546 ********
2026-04-16 05:34:34.301277 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:34:34.301283 | orchestrator |
2026-04-16 05:34:34.301290 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-16 05:34:34.301296 | orchestrator | Thursday 16 April 2026 05:33:01 +0000 (0:00:00.222) 0:00:27.768 ********
2026-04-16 05:34:34.301302 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:34:34.301308 | orchestrator |
2026-04-16 05:34:34.301314 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-16 05:34:34.301321 | orchestrator | Thursday 16 April 2026 05:33:08 +0000 (0:00:06.595) 0:00:34.364 ********
2026-04-16 05:34:34.301327 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:34:34.301333 | orchestrator |
2026-04-16 05:34:34.301340 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-16 05:34:34.301346 | orchestrator |
2026-04-16 05:34:34.301352 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-16 05:34:34.301358 | orchestrator | Thursday 16 April 2026 05:33:57 +0000 (0:00:49.420) 0:01:23.785 ********
2026-04-16 05:34:34.301365 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:34:34.301371 | orchestrator |
2026-04-16 05:34:34.301377 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-16 05:34:34.301383 | orchestrator | Thursday 16 April 2026 05:33:58 +0000 (0:00:00.594) 0:01:24.379 ********
2026-04-16 05:34:34.301389 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:34:34.301395 | orchestrator |
2026-04-16 05:34:34.301402 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-16 05:34:34.301408 | orchestrator | Thursday 16 April 2026 05:33:58 +0000 (0:00:00.220) 0:01:24.600 ********
2026-04-16 05:34:34.301414 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:34:34.301420 | orchestrator |
2026-04-16 05:34:34.301426 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-16 05:34:34.301445 | orchestrator | Thursday 16 April 2026 05:33:59 +0000 (0:00:01.514) 0:01:26.115 ********
2026-04-16 05:34:34.301452 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:34:34.301458 | orchestrator |
2026-04-16 05:34:34.301464 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-04-16 05:34:34.301471 | orchestrator |
2026-04-16 05:34:34.301477 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-04-16 05:34:34.301483 | orchestrator | Thursday 16 April 2026 05:34:13 +0000 (0:00:13.512) 0:01:39.627 ********
2026-04-16 05:34:34.301489 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:34:34.301496 | orchestrator |
2026-04-16 05:34:34.301502 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-04-16 05:34:34.301560 | orchestrator | Thursday 16 April 2026 05:34:14 +0000 (0:00:00.727) 0:01:40.354 ********
2026-04-16 05:34:34.301568 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:34:34.301574 | orchestrator |
2026-04-16 05:34:34.301580 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-04-16 05:34:34.301587 | orchestrator | Thursday 16 April 2026 05:34:14 +0000 (0:00:00.219) 0:01:40.574 ********
2026-04-16 05:34:34.301593 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:34:34.301599 | orchestrator |
2026-04-16 05:34:34.301606 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-04-16 05:34:34.301612 | orchestrator | Thursday 16 April 2026 05:34:20 +0000 (0:00:06.561) 0:01:47.135 ********
2026-04-16 05:34:34.301618 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:34:34.301624 | orchestrator |
2026-04-16 05:34:34.301631 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-04-16 05:34:34.301638 | orchestrator |
2026-04-16 05:34:34.301645 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-04-16 05:34:34.301652 | orchestrator | Thursday 16 April 2026 05:34:30 +0000 (0:00:09.974) 0:01:57.109 ********
2026-04-16 05:34:34.301659 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:34:34.301666 | orchestrator |
2026-04-16 05:34:34.301673 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-04-16 05:34:34.301680 | orchestrator | Thursday 16 April 2026 05:34:31 +0000 (0:00:00.456) 0:01:57.565 ********
2026-04-16 05:34:34.301687 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-16 05:34:34.301694 | orchestrator | enable_outward_rabbitmq_True
2026-04-16 05:34:34.301701 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-16 05:34:34.301709 | orchestrator | outward_rabbitmq_restart
2026-04-16 05:34:34.301716 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:34:34.301723 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:34:34.301730 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:34:34.301737 | orchestrator |
2026-04-16 05:34:34.301744 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-04-16 05:34:34.301752 | orchestrator | skipping: no hosts matched
2026-04-16 05:34:34.301759 | orchestrator |
2026-04-16 05:34:34.301766 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-04-16 05:34:34.301773 | orchestrator | skipping: no hosts matched
2026-04-16 05:34:34.301780 | orchestrator |
2026-04-16 05:34:34.301787 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-04-16 05:34:34.301795 | orchestrator | skipping: no hosts matched
2026-04-16 05:34:34.301802 | orchestrator |
2026-04-16 05:34:34.301809 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:34:34.301837 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-04-16 05:34:34.301852 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:34:34.301863 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:34:34.301876 | orchestrator |
2026-04-16 05:34:34.301887 | orchestrator |
2026-04-16 05:34:34.301898 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:34:34.301909 | orchestrator | Thursday 16 April 2026 05:34:34 +0000 (0:00:02.746) 0:02:00.312 ********
2026-04-16 05:34:34.301921 | orchestrator | ===============================================================================
2026-04-16 05:34:34.301933 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 72.91s
2026-04-16 05:34:34.301945 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 14.67s
2026-04-16 05:34:34.301963 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.95s
2026-04-16 05:34:34.301971 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.75s
2026-04-16 05:34:34.301978 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.93s
2026-04-16 05:34:34.301985 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.62s
2026-04-16 05:34:34.301993 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.59s
2026-04-16 05:34:34.301999 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.58s
2026-04-16 05:34:34.302006 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s
2026-04-16 05:34:34.302012 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.29s
2026-04-16 05:34:34.302057 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.29s
2026-04-16 05:34:34.302064 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.28s
2026-04-16 05:34:34.302070 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.13s
2026-04-16 05:34:34.302076 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s
2026-04-16 05:34:34.302087 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.88s
2026-04-16 05:34:34.302094 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.80s
2026-04-16 05:34:34.302100 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.78s
2026-04-16 05:34:34.302106 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.76s
2026-04-16 05:34:34.302112 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.66s
2026-04-16 05:34:34.302118 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.53s
2026-04-16 05:34:36.452735 | orchestrator | 2026-04-16 05:34:36 | INFO  | Task 81089b2a-e1a9-49e3-b8ac-0b223cf44515 (openvswitch) was prepared for execution.
2026-04-16 05:34:36.452836 | orchestrator | 2026-04-16 05:34:36 | INFO  | It takes a moment until task 81089b2a-e1a9-49e3-b8ac-0b223cf44515 (openvswitch) has been started and output is visible here.
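The tasks recap above is dominated by "Waiting for rabbitmq to start" (72.91s total across the three nodes): after each serial container restart, the play repeatedly probes the node until it reports ready. A minimal sketch of that poll-with-retries pattern in Python (illustrative only; the `check` callable, retry count, and interval are assumptions, not Kolla-Ansible's actual task, which expresses the same idea with Ansible's `until`/`retries`/`delay` keywords):

```python
import time


def wait_until_ready(check, retries=30, interval=1.0, sleep=time.sleep):
    """Poll check() until it returns True, sleeping `interval` seconds
    between attempts. Returns the attempt number that succeeded, or
    raises TimeoutError once all retries are exhausted."""
    for attempt in range(1, retries + 1):
        if check():
            return attempt
        if attempt < retries:
            sleep(interval)
    raise TimeoutError(f"not ready after {retries} attempts")


# Example: a node that becomes "ready" on the third probe
# (interval=0 so the demo runs instantly).
probes = iter([False, False, True])
attempts = wait_until_ready(lambda: next(probes), retries=5, interval=0)
```

Restarting one node at a time and waiting for it to rejoin before moving on (as the three "Restart rabbitmq services" plays do for node-0, node-1, node-2) is what keeps the RabbitMQ cluster quorate throughout the upgrade.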
2026-04-16 05:34:46.330223 | orchestrator |
2026-04-16 05:34:46.330332 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 05:34:46.330348 | orchestrator |
2026-04-16 05:34:46.330360 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 05:34:46.330375 | orchestrator | Thursday 16 April 2026 05:34:39 +0000 (0:00:00.184) 0:00:00.184 ********
2026-04-16 05:34:46.330394 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:34:46.330426 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:34:46.330444 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:34:46.330462 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:34:46.330482 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:34:46.330502 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:34:46.330615 | orchestrator |
2026-04-16 05:34:46.330631 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 05:34:46.330642 | orchestrator | Thursday 16 April 2026 05:34:39 +0000 (0:00:00.478) 0:00:00.662 ********
2026-04-16 05:34:46.330654 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330666 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330679 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330691 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330706 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330725 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-04-16 05:34:46.330743 | orchestrator |
2026-04-16 05:34:46.330797 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-04-16 05:34:46.330816 | orchestrator |
2026-04-16 05:34:46.330836 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-04-16 05:34:46.330854 | orchestrator | Thursday 16 April 2026 05:34:40 +0000 (0:00:00.409) 0:00:01.072 ********
2026-04-16 05:34:46.330873 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:34:46.330892 | orchestrator |
2026-04-16 05:34:46.330911 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-16 05:34:46.330927 | orchestrator | Thursday 16 April 2026 05:34:41 +0000 (0:00:00.948) 0:00:02.020 ********
2026-04-16 05:34:46.330945 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-16 05:34:46.330964 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-16 05:34:46.330981 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-16 05:34:46.330999 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-16 05:34:46.331017 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-16 05:34:46.331035 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-16 05:34:46.331053 | orchestrator |
2026-04-16 05:34:46.331069 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-16 05:34:46.331087 | orchestrator | Thursday 16 April 2026 05:34:42 +0000 (0:00:00.927) 0:00:02.947 ********
2026-04-16 05:34:46.331104 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-04-16 05:34:46.331122 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-04-16 05:34:46.331140 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-04-16 05:34:46.331159 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-04-16 05:34:46.331178 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-04-16 05:34:46.331197 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-04-16 05:34:46.331215 | orchestrator |
2026-04-16 05:34:46.331233 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-16 05:34:46.331250 | orchestrator | Thursday 16 April 2026 05:34:43 +0000 (0:00:01.364) 0:00:04.312 ********
2026-04-16 05:34:46.331269 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-04-16 05:34:46.331288 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:34:46.331310 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-04-16 05:34:46.331329 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:34:46.331349 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-04-16 05:34:46.331368 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:34:46.331387 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-04-16 05:34:46.331408 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:34:46.331427 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-04-16 05:34:46.331445 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:34:46.331462 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-04-16 05:34:46.331479 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:34:46.331496 | orchestrator |
2026-04-16 05:34:46.331545 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-04-16 05:34:46.331566 | orchestrator | Thursday 16 April 2026 05:34:44 +0000 (0:00:00.940) 0:00:05.253 ********
2026-04-16 05:34:46.331585 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:34:46.331603 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:34:46.331621 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:34:46.331638 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:34:46.331655 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:34:46.331672 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:34:46.331692 | orchestrator | 2026-04-16 05:34:46.331712 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-16 05:34:46.331755 | orchestrator | Thursday 16 April 2026 05:34:45 +0000 (0:00:00.574) 0:00:05.827 ******** 2026-04-16 05:34:46.331811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:46.331839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:46.331859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:46.331927 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:46.331957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:46.331992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561883 | orchestrator | 2026-04-16 05:34:48.561888 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-16 05:34:48.561894 | orchestrator | Thursday 16 April 2026 05:34:46 +0000 (0:00:01.300) 0:00:07.128 ******** 2026-04-16 05:34:48.561898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561912 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561923 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:48.561932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137724 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137816 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137829 | orchestrator | 2026-04-16 05:34:51.137842 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-16 05:34:51.137855 | orchestrator | Thursday 16 April 2026 05:34:48 +0000 (0:00:02.222) 0:00:09.351 ******** 2026-04-16 05:34:51.137866 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:34:51.137878 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:34:51.137888 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:34:51.137899 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:34:51.137910 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:34:51.137920 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:34:51.137932 | orchestrator | 2026-04-16 05:34:51.137944 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-04-16 05:34:51.137955 | orchestrator | Thursday 16 April 2026 05:34:49 +0000 (0:00:00.918) 0:00:10.269 ******** 2026-04-16 05:34:51.137966 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:51.137979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:51.138003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:51.138074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:34:51.138099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 
05:35:13.756699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 05:35:13.756778 | orchestrator | 2026-04-16 05:35:13.756794 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-16 05:35:13.756810 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:01.661) 0:00:11.930 ******** 2026-04-16 05:35:13.756824 | orchestrator | 2026-04-16 05:35:13.756837 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-16 05:35:13.756851 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:00.274) 0:00:12.205 ******** 2026-04-16 05:35:13.756864 | orchestrator | 2026-04-16 05:35:13.756888 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-16 05:35:13.756902 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:00.125) 0:00:12.330 ******** 2026-04-16 05:35:13.756915 | orchestrator | 2026-04-16 05:35:13.756928 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
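The healthcheck dicts echoed in the task output above (e.g. `{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}`) carry string-valued seconds in kolla's format. The following sketch is illustrative only — `to_docker_flags` is a hypothetical helper, not kolla-ansible code — showing how such a dict maps onto Docker-style healthcheck flags:

```python
def to_docker_flags(hc):
    """Translate a kolla-style healthcheck dict (string-valued seconds)
    into `docker run`-style flags. Hypothetical helper for illustration."""
    cmd = hc["test"]
    # kolla encodes the probe as ['CMD-SHELL', '<shell command>']
    assert cmd[0] == "CMD-SHELL"
    return [
        "--health-cmd", cmd[1],
        "--health-interval", f"{int(hc['interval'])}s",
        "--health-retries", str(int(hc["retries"])),
        "--health-start-period", f"{int(hc['start_period'])}s",
        "--health-timeout", f"{int(hc['timeout'])}s",
    ]

# Healthcheck for the openvswitch_db container as printed in the log above.
db_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "ovsdb-client list-dbs"],
    "timeout": "30",
}

print(to_docker_flags(db_healthcheck))
```

The same mapping applies to the `openvswitch_vswitchd` healthcheck, whose probe is `ovs-appctl version`.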
2026-04-16 05:35:13.756942 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:00.124) 0:00:12.455 ******** 2026-04-16 05:35:13.756957 | orchestrator | 2026-04-16 05:35:13.756972 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-16 05:35:13.756987 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:00.122) 0:00:12.577 ******** 2026-04-16 05:35:13.757003 | orchestrator | 2026-04-16 05:35:13.757019 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-04-16 05:35:13.757036 | orchestrator | Thursday 16 April 2026 05:34:51 +0000 (0:00:00.123) 0:00:12.700 ******** 2026-04-16 05:35:13.757051 | orchestrator | 2026-04-16 05:35:13.757067 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-04-16 05:35:13.757086 | orchestrator | Thursday 16 April 2026 05:34:52 +0000 (0:00:00.123) 0:00:12.824 ******** 2026-04-16 05:35:13.757104 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:35:13.757117 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:35:13.757131 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:35:13.757145 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:35:13.757159 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:35:13.757175 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:35:13.757189 | orchestrator | 2026-04-16 05:35:13.757203 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-04-16 05:35:13.757219 | orchestrator | Thursday 16 April 2026 05:34:58 +0000 (0:00:06.824) 0:00:19.648 ******** 2026-04-16 05:35:13.757234 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:35:13.757260 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:35:13.757275 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:35:13.757288 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:35:13.757301 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 05:35:13.757314 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:35:13.757327 | orchestrator | 2026-04-16 05:35:13.757339 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-16 05:35:13.757350 | orchestrator | Thursday 16 April 2026 05:34:59 +0000 (0:00:01.008) 0:00:20.657 ******** 2026-04-16 05:35:13.757361 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:35:13.757372 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:35:13.757383 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:35:13.757394 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:35:13.757406 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:35:13.757417 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:35:13.757430 | orchestrator | 2026-04-16 05:35:13.757443 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-04-16 05:35:13.757455 | orchestrator | Thursday 16 April 2026 05:35:07 +0000 (0:00:07.892) 0:00:28.550 ******** 2026-04-16 05:35:13.757468 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-04-16 05:35:13.757482 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-04-16 05:35:13.757495 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-04-16 05:35:13.757508 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-04-16 05:35:13.757521 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-04-16 05:35:13.757559 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-04-16 
05:35:13.757572 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-04-16 05:35:13.757610 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-04-16 05:35:26.188768 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-04-16 05:35:26.188872 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-04-16 05:35:26.188885 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-04-16 05:35:26.188896 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-04-16 05:35:26.188906 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188916 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188926 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188935 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188945 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188954 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-04-16 05:35:26.188964 | orchestrator | 2026-04-16 05:35:26.188975 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-04-16 05:35:26.188986 | orchestrator | Thursday 16 April 2026 05:35:13 +0000 (0:00:05.908) 0:00:34.458 ******** 2026-04-16 05:35:26.188997 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-04-16 05:35:26.189008 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:35:26.189018 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-04-16 05:35:26.189028 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:35:26.189052 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-04-16 05:35:26.189072 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:35:26.189082 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-04-16 05:35:26.189092 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-04-16 05:35:26.189101 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-04-16 05:35:26.189111 | orchestrator | 2026-04-16 05:35:26.189121 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-04-16 05:35:26.189131 | orchestrator | Thursday 16 April 2026 05:35:16 +0000 (0:00:02.272) 0:00:36.731 ******** 2026-04-16 05:35:26.189141 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-04-16 05:35:26.189150 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:35:26.189160 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-04-16 05:35:26.189169 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:35:26.189179 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-04-16 05:35:26.189189 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:35:26.189198 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-04-16 05:35:26.189208 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-04-16 05:35:26.189234 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-04-16 05:35:26.189244 | orchestrator 
| 2026-04-16 05:35:26.189253 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-04-16 05:35:26.189263 | orchestrator | Thursday 16 April 2026 05:35:18 +0000 (0:00:02.955) 0:00:39.687 ******** 2026-04-16 05:35:26.189273 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:35:26.189282 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:35:26.189292 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:35:26.189321 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:35:26.189331 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:35:26.189341 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:35:26.189350 | orchestrator | 2026-04-16 05:35:26.189360 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:35:26.189371 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 05:35:26.189383 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 05:35:26.189393 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 05:35:26.189402 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 05:35:26.189412 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 05:35:26.189422 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 05:35:26.189431 | orchestrator | 2026-04-16 05:35:26.189441 | orchestrator | 2026-04-16 05:35:26.189451 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:35:26.189460 | orchestrator | Thursday 16 April 2026 05:35:25 +0000 (0:00:06.902) 0:00:46.590 ******** 2026-04-16 05:35:26.189487 | 
orchestrator | =============================================================================== 2026-04-16 05:35:26.189498 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.80s 2026-04-16 05:35:26.189507 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.82s 2026-04-16 05:35:26.189517 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 5.91s 2026-04-16 05:35:26.189545 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.96s 2026-04-16 05:35:26.189556 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.27s 2026-04-16 05:35:26.189565 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.22s 2026-04-16 05:35:26.189575 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.66s 2026-04-16 05:35:26.189585 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.36s 2026-04-16 05:35:26.189594 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.30s 2026-04-16 05:35:26.189604 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.01s 2026-04-16 05:35:26.189614 | orchestrator | openvswitch : include_tasks --------------------------------------------- 0.95s 2026-04-16 05:35:26.189623 | orchestrator | module-load : Drop module persistence ----------------------------------- 0.94s 2026-04-16 05:35:26.189633 | orchestrator | module-load : Load modules ---------------------------------------------- 0.93s 2026-04-16 05:35:26.189643 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.92s 2026-04-16 05:35:26.189652 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.89s 2026-04-16 05:35:26.189662 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.57s 2026-04-16 05:35:26.189672 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2026-04-16 05:35:26.189681 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2026-04-16 05:35:28.342476 | orchestrator | 2026-04-16 05:35:28 | INFO  | Task 579cafff-e088-45b8-95c5-5d41bc512956 (ovn) was prepared for execution. 2026-04-16 05:35:28.342617 | orchestrator | 2026-04-16 05:35:28 | INFO  | It takes a moment until task 579cafff-e088-45b8-95c5-5d41bc512956 (ovn) has been started and output is visible here. 2026-04-16 05:35:38.371382 | orchestrator | 2026-04-16 05:35:38.371495 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 05:35:38.371514 | orchestrator | 2026-04-16 05:35:38.371527 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 05:35:38.371624 | orchestrator | Thursday 16 April 2026 05:35:32 +0000 (0:00:00.155) 0:00:00.155 ******** 2026-04-16 05:35:38.371637 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:35:38.371651 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:35:38.371663 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:35:38.371676 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:35:38.371689 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:35:38.371702 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:35:38.371715 | orchestrator | 2026-04-16 05:35:38.371727 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 05:35:38.371740 | orchestrator | Thursday 16 April 2026 05:35:32 +0000 (0:00:00.649) 0:00:00.804 ******** 2026-04-16 05:35:38.371770 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-04-16 05:35:38.371784 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-04-16 
05:35:38.371796 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-04-16 05:35:38.371808 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-04-16 05:35:38.371821 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-04-16 05:35:38.371833 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-04-16 05:35:38.371845 | orchestrator | 2026-04-16 05:35:38.371859 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-04-16 05:35:38.371871 | orchestrator | 2026-04-16 05:35:38.371885 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-04-16 05:35:38.371898 | orchestrator | Thursday 16 April 2026 05:35:33 +0000 (0:00:00.815) 0:00:01.620 ******** 2026-04-16 05:35:38.371912 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:35:38.371927 | orchestrator | 2026-04-16 05:35:38.371940 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-04-16 05:35:38.371953 | orchestrator | Thursday 16 April 2026 05:35:34 +0000 (0:00:01.006) 0:00:02.626 ******** 2026-04-16 05:35:38.371969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.371985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372000 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372098 | orchestrator | 2026-04-16 05:35:38.372111 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-04-16 05:35:38.372123 | orchestrator | Thursday 16 April 2026 05:35:35 +0000 (0:00:01.115) 0:00:03.741 ******** 2026-04-16 05:35:38.372140 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372165 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372178 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372224 | orchestrator | 2026-04-16 05:35:38.372237 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-16 05:35:38.372249 | orchestrator | Thursday 16 April 2026 05:35:37 +0000 (0:00:01.427) 0:00:05.168 ******** 2026-04-16 05:35:38.372261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:35:38.372294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827536 | orchestrator | 2026-04-16 05:36:01.827584 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-16 05:36:01.827593 | orchestrator | Thursday 16 April 2026 05:35:38 +0000 (0:00:01.059) 0:00:06.228 ******** 2026-04-16 05:36:01.827599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827630 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827669 | orchestrator | 2026-04-16 05:36:01.827675 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-04-16 05:36:01.827682 | orchestrator | Thursday 16 April 2026 05:35:39 +0000 (0:00:01.540) 0:00:07.769 ******** 
2026-04-16 05:36:01.827693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827706 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827724 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:01.827736 | orchestrator | 2026-04-16 05:36:01.827743 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-16 05:36:01.827749 | orchestrator | Thursday 16 April 2026 05:35:41 +0000 (0:00:01.245) 0:00:09.015 ******** 2026-04-16 05:36:01.827756 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:36:01.827763 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:36:01.827769 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:36:01.827775 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:36:01.827781 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:36:01.827787 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:36:01.827793 | orchestrator | 2026-04-16 05:36:01.827800 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-16 05:36:01.827806 | orchestrator | Thursday 16 April 2026 05:35:43 +0000 (0:00:02.465) 0:00:11.480 ******** 2026-04-16 05:36:01.827812 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
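The "Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB" tasks above map onto a handful of `ovs-vsctl` calls per node. The following is a minimal sketch of those calls for a single node, using testbed-node-4's values as copied from the log; the commands are only printed here, not executed, and the exact invocation kolla-ansible uses may differ.

```shell
# Hypothetical sketch of the tasks above, reduced to the ovs-vsctl calls
# they correspond to on one node (values copied from the log output).
# Commands are collected and printed rather than executed.
set -eu

cmds="ovs-vsctl --may-exist add-br br-int"
for kv in \
    ovn-encap-ip=192.168.16.14 \
    ovn-encap-type=geneve \
    "ovn-remote=tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
    ovn-remote-probe-interval=60000 \
    ovn-openflow-probe-interval=60 \
    ovn-monitor-all=false; do
  cmds="${cmds}
ovs-vsctl set Open_vSwitch . external_ids:${kv}"
done
printf '%s\n' "$cmds"
```

Each loop item in the Ansible task becomes one `external_ids` key on the local Open_vSwitch record, which is how ovn-controller discovers its tunnel endpoint, encapsulation type, and the southbound DB endpoints.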
2026-04-16 05:36:01.827820 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-16 05:36:01.827826 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-16 05:36:01.827832 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-16 05:36:01.827838 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-16 05:36:01.827844 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-16 05:36:01.827854 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.262940 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.263045 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.263056 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.263079 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.263088 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 05:36:41.263096 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263106 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263115 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263142 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-04-16 05:36:41.263168 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263177 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263193 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263210 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 05:36:41.263217 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 05:36:41.263225 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 05:36:41.263232 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 05:36:41.263240 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 05:36:41.263248 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-04-16 05:36:41.263256 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 05:36:41.263264 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263272 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263280 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263289 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263297 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263305 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 05:36:41.263313 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 05:36:41.263321 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 05:36:41.263329 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 05:36:41.263337 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-16 05:36:41.263345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-16 05:36:41.263353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-16 05:36:41.263361 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-04-16 05:36:41.263391 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-16 05:36:41.263399 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-16 05:36:41.263412 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-16 05:36:41.263420 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-16 05:36:41.263428 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-16 05:36:41.263436 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 05:36:41.263444 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 05:36:41.263452 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 05:36:41.263460 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 05:36:41.263468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 05:36:41.263477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 05:36:41.263485 | orchestrator | 2026-04-16 05:36:41.263493 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-04-16 05:36:41.263502 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:17.672) 0:00:29.153 ******** 2026-04-16 05:36:41.263510 | orchestrator | 2026-04-16 05:36:41.263519 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 05:36:41.263544 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.212) 0:00:29.365 ******** 2026-04-16 05:36:41.263552 | orchestrator | 2026-04-16 05:36:41.263561 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 05:36:41.263569 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.062) 0:00:29.428 ******** 2026-04-16 05:36:41.263577 | orchestrator | 2026-04-16 05:36:41.263585 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 05:36:41.263593 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.061) 0:00:29.489 ******** 2026-04-16 05:36:41.263601 | orchestrator | 2026-04-16 05:36:41.263608 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 05:36:41.263616 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.060) 0:00:29.549 ******** 2026-04-16 05:36:41.263624 | orchestrator | 2026-04-16 05:36:41.263632 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 05:36:41.263639 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.060) 0:00:29.610 ******** 2026-04-16 05:36:41.263647 | orchestrator | 2026-04-16 05:36:41.263656 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-04-16 05:36:41.263664 | orchestrator | Thursday 16 April 2026 05:36:01 +0000 (0:00:00.061) 0:00:29.671 ******** 2026-04-16 05:36:41.263671 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:36:41.263681 | orchestrator | ok: 
[testbed-node-3] 2026-04-16 05:36:41.263688 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:36:41.263696 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:41.263704 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:41.263712 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:41.263720 | orchestrator | 2026-04-16 05:36:41.263727 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-16 05:36:41.263735 | orchestrator | Thursday 16 April 2026 05:36:03 +0000 (0:00:01.539) 0:00:31.210 ******** 2026-04-16 05:36:41.263749 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:36:41.263757 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:36:41.263765 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:36:41.263773 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:36:41.263780 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:36:41.263788 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:36:41.263796 | orchestrator | 2026-04-16 05:36:41.263804 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-16 05:36:41.263811 | orchestrator | 2026-04-16 05:36:41.263819 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 05:36:41.263827 | orchestrator | Thursday 16 April 2026 05:36:39 +0000 (0:00:35.883) 0:01:07.093 ******** 2026-04-16 05:36:41.263835 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:36:41.263843 | orchestrator | 2026-04-16 05:36:41.263851 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 05:36:41.263858 | orchestrator | Thursday 16 April 2026 05:36:39 +0000 (0:00:00.633) 0:01:07.727 ******** 2026-04-16 05:36:41.263866 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-16 05:36:41.263874 | orchestrator | 2026-04-16 05:36:41.263882 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-16 05:36:41.263889 | orchestrator | Thursday 16 April 2026 05:36:40 +0000 (0:00:00.500) 0:01:08.227 ******** 2026-04-16 05:36:41.263897 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:41.263905 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:41.263912 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:41.263920 | orchestrator | 2026-04-16 05:36:41.263929 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-16 05:36:41.263942 | orchestrator | Thursday 16 April 2026 05:36:41 +0000 (0:00:00.884) 0:01:09.112 ******** 2026-04-16 05:36:51.357170 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.357283 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.357298 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.357311 | orchestrator | 2026-04-16 05:36:51.357323 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-16 05:36:51.357368 | orchestrator | Thursday 16 April 2026 05:36:41 +0000 (0:00:00.306) 0:01:09.419 ******** 2026-04-16 05:36:51.357381 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.357392 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.357402 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.357413 | orchestrator | 2026-04-16 05:36:51.357424 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-16 05:36:51.357436 | orchestrator | Thursday 16 April 2026 05:36:41 +0000 (0:00:00.313) 0:01:09.732 ******** 2026-04-16 05:36:51.357447 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.357457 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.357468 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.357479 | orchestrator | 
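The `lookup_cluster` tasks above probe testbed-node-0..2 for existing OVN DB volumes and an established Raft cluster before deciding whether to bootstrap a new one. A hedged sketch of the kind of probe involved follows; the `ovn-appctl` control-socket paths are upstream defaults and an assumption (the log does not show them), and the probes are only printed, not executed.

```shell
# Hypothetical manual equivalent of the cluster lookup: ask each OVN DB
# for its Raft cluster status. Socket paths below are assumed upstream
# defaults and may differ in a Kolla deployment.
set -eu

probes=""
for db in "OVN_Northbound /var/run/ovn/ovnnb_db.ctl" \
          "OVN_Southbound /var/run/ovn/ovnsb_db.ctl"; do
  set -- $db
  probes="${probes}ovn-appctl -t $2 cluster/status $1
"
done
# A live check would pipe each command's output into: grep -q '^Role: leader'
printf '%s' "$probes"
```

When no volume or no leader is found on any host, the play falls through to `bootstrap-initial.yml`, as seen above with the "Set bootstrap args fact for NB/SB (new cluster)" tasks.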
2026-04-16 05:36:51.357489 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-16 05:36:51.357500 | orchestrator | Thursday 16 April 2026 05:36:42 +0000 (0:00:00.290) 0:01:10.022 ******** 2026-04-16 05:36:51.357531 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.357543 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.357553 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.357564 | orchestrator | 2026-04-16 05:36:51.357575 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-16 05:36:51.357585 | orchestrator | Thursday 16 April 2026 05:36:42 +0000 (0:00:00.473) 0:01:10.496 ******** 2026-04-16 05:36:51.357596 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357608 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357618 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357629 | orchestrator | 2026-04-16 05:36:51.357640 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-16 05:36:51.357674 | orchestrator | Thursday 16 April 2026 05:36:42 +0000 (0:00:00.279) 0:01:10.776 ******** 2026-04-16 05:36:51.357686 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357697 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357707 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357718 | orchestrator | 2026-04-16 05:36:51.357729 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-16 05:36:51.357740 | orchestrator | Thursday 16 April 2026 05:36:43 +0000 (0:00:00.300) 0:01:11.077 ******** 2026-04-16 05:36:51.357751 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357761 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357772 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357782 | orchestrator | 2026-04-16 
05:36:51.357793 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-16 05:36:51.357804 | orchestrator | Thursday 16 April 2026 05:36:43 +0000 (0:00:00.268) 0:01:11.345 ******** 2026-04-16 05:36:51.357815 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357826 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357836 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357847 | orchestrator | 2026-04-16 05:36:51.357858 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-16 05:36:51.357868 | orchestrator | Thursday 16 April 2026 05:36:43 +0000 (0:00:00.269) 0:01:11.615 ******** 2026-04-16 05:36:51.357879 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357890 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357901 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357912 | orchestrator | 2026-04-16 05:36:51.357923 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-16 05:36:51.357934 | orchestrator | Thursday 16 April 2026 05:36:44 +0000 (0:00:00.431) 0:01:12.047 ******** 2026-04-16 05:36:51.357944 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.357955 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.357966 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.357976 | orchestrator | 2026-04-16 05:36:51.357987 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-16 05:36:51.357998 | orchestrator | Thursday 16 April 2026 05:36:44 +0000 (0:00:00.281) 0:01:12.328 ******** 2026-04-16 05:36:51.358009 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358079 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358091 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358101 | orchestrator | 2026-04-16 
05:36:51.358112 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-16 05:36:51.358123 | orchestrator | Thursday 16 April 2026 05:36:44 +0000 (0:00:00.268) 0:01:12.597 ******** 2026-04-16 05:36:51.358134 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358145 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358155 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358166 | orchestrator | 2026-04-16 05:36:51.358177 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-16 05:36:51.358188 | orchestrator | Thursday 16 April 2026 05:36:44 +0000 (0:00:00.256) 0:01:12.854 ******** 2026-04-16 05:36:51.358198 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358209 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358220 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358230 | orchestrator | 2026-04-16 05:36:51.358241 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-16 05:36:51.358252 | orchestrator | Thursday 16 April 2026 05:36:45 +0000 (0:00:00.429) 0:01:13.284 ******** 2026-04-16 05:36:51.358263 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358273 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358284 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358295 | orchestrator | 2026-04-16 05:36:51.358306 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-16 05:36:51.358317 | orchestrator | Thursday 16 April 2026 05:36:45 +0000 (0:00:00.285) 0:01:13.569 ******** 2026-04-16 05:36:51.358336 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358347 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358358 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358369 | orchestrator | 2026-04-16 
05:36:51.358379 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-04-16 05:36:51.358390 | orchestrator | Thursday 16 April 2026 05:36:45 +0000 (0:00:00.288) 0:01:13.857 ******** 2026-04-16 05:36:51.358421 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358432 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358443 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358454 | orchestrator | 2026-04-16 05:36:51.358465 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 05:36:51.358482 | orchestrator | Thursday 16 April 2026 05:36:46 +0000 (0:00:00.268) 0:01:14.125 ******** 2026-04-16 05:36:51.358494 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:36:51.358504 | orchestrator | 2026-04-16 05:36:51.358555 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-04-16 05:36:51.358567 | orchestrator | Thursday 16 April 2026 05:36:46 +0000 (0:00:00.657) 0:01:14.783 ******** 2026-04-16 05:36:51.358578 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.358589 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.358599 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.358610 | orchestrator | 2026-04-16 05:36:51.358621 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-04-16 05:36:51.358632 | orchestrator | Thursday 16 April 2026 05:36:47 +0000 (0:00:00.424) 0:01:15.208 ******** 2026-04-16 05:36:51.358643 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:36:51.358653 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:36:51.358664 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:36:51.358675 | orchestrator | 2026-04-16 05:36:51.358686 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-04-16 05:36:51.358697 | orchestrator | Thursday 16 April 2026 05:36:47 +0000 (0:00:00.413) 0:01:15.621 ******** 2026-04-16 05:36:51.358708 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358719 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358729 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358740 | orchestrator | 2026-04-16 05:36:51.358751 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-04-16 05:36:51.358762 | orchestrator | Thursday 16 April 2026 05:36:48 +0000 (0:00:00.299) 0:01:15.920 ******** 2026-04-16 05:36:51.358773 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358783 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358794 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358805 | orchestrator | 2026-04-16 05:36:51.358816 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-04-16 05:36:51.358827 | orchestrator | Thursday 16 April 2026 05:36:48 +0000 (0:00:00.480) 0:01:16.401 ******** 2026-04-16 05:36:51.358837 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358848 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358859 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358870 | orchestrator | 2026-04-16 05:36:51.358881 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-04-16 05:36:51.358891 | orchestrator | Thursday 16 April 2026 05:36:48 +0000 (0:00:00.310) 0:01:16.711 ******** 2026-04-16 05:36:51.358902 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358913 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.358924 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.358935 | orchestrator | 2026-04-16 05:36:51.358945 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-04-16 05:36:51.358956 | orchestrator | Thursday 16 April 2026 05:36:49 +0000 (0:00:00.325) 0:01:17.037 ******** 2026-04-16 05:36:51.358967 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.358989 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.359000 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.359010 | orchestrator | 2026-04-16 05:36:51.359021 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-04-16 05:36:51.359032 | orchestrator | Thursday 16 April 2026 05:36:49 +0000 (0:00:00.317) 0:01:17.355 ******** 2026-04-16 05:36:51.359043 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:36:51.359054 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:36:51.359065 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:36:51.359075 | orchestrator | 2026-04-16 05:36:51.359086 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-16 05:36:51.359097 | orchestrator | Thursday 16 April 2026 05:36:49 +0000 (0:00:00.492) 0:01:17.848 ******** 2026-04-16 05:36:51.359111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:51.359124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-16 05:36:51.359136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:51.359161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.320805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.320921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.320939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.320952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321032 | orchestrator | 2026-04-16 05:36:57.321052 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-16 05:36:57.321072 | orchestrator | Thursday 16 April 2026 05:36:51 +0000 (0:00:01.365) 0:01:19.213 ******** 2026-04-16 05:36:57.321097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321120 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321264 | orchestrator | 2026-04-16 05:36:57.321275 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-16 05:36:57.321286 | orchestrator | Thursday 16 April 2026 05:36:54 +0000 (0:00:03.641) 0:01:22.855 ******** 2026-04-16 05:36:57.321297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:36:57.321374 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.914832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.914924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.914931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.914936 | orchestrator | 2026-04-16 05:37:15.914942 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:15.914948 | 
orchestrator | Thursday 16 April 2026 05:36:56 +0000 (0:00:01.973) 0:01:24.828 ******** 2026-04-16 05:37:15.914952 | orchestrator | 2026-04-16 05:37:15.914957 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:15.914961 | orchestrator | Thursday 16 April 2026 05:36:57 +0000 (0:00:00.061) 0:01:24.890 ******** 2026-04-16 05:37:15.914965 | orchestrator | 2026-04-16 05:37:15.914970 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:15.914974 | orchestrator | Thursday 16 April 2026 05:36:57 +0000 (0:00:00.058) 0:01:24.948 ******** 2026-04-16 05:37:15.914978 | orchestrator | 2026-04-16 05:37:15.914983 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-16 05:37:15.914987 | orchestrator | Thursday 16 April 2026 05:36:57 +0000 (0:00:00.218) 0:01:25.166 ******** 2026-04-16 05:37:15.914992 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:15.914997 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:37:15.915001 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:37:15.915006 | orchestrator | 2026-04-16 05:37:15.915010 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-16 05:37:15.915015 | orchestrator | Thursday 16 April 2026 05:36:59 +0000 (0:00:02.371) 0:01:27.537 ******** 2026-04-16 05:37:15.915019 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:15.915023 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:37:15.915028 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:37:15.915032 | orchestrator | 2026-04-16 05:37:15.915037 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-16 05:37:15.915041 | orchestrator | Thursday 16 April 2026 05:37:02 +0000 (0:00:02.390) 0:01:29.928 ******** 2026-04-16 05:37:15.915045 | orchestrator | changed: 
[testbed-node-1] 2026-04-16 05:37:15.915050 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:15.915054 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:37:15.915058 | orchestrator | 2026-04-16 05:37:15.915063 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-16 05:37:15.915067 | orchestrator | Thursday 16 April 2026 05:37:09 +0000 (0:00:07.504) 0:01:37.432 ******** 2026-04-16 05:37:15.915071 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:37:15.915076 | orchestrator | 2026-04-16 05:37:15.915080 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-16 05:37:15.915085 | orchestrator | Thursday 16 April 2026 05:37:09 +0000 (0:00:00.108) 0:01:37.541 ******** 2026-04-16 05:37:15.915089 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:15.915094 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:15.915099 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:15.915103 | orchestrator | 2026-04-16 05:37:15.915108 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-16 05:37:15.915112 | orchestrator | Thursday 16 April 2026 05:37:10 +0000 (0:00:00.948) 0:01:38.490 ******** 2026-04-16 05:37:15.915117 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:37:15.915126 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:37:15.915130 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:15.915134 | orchestrator | 2026-04-16 05:37:15.915139 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-16 05:37:15.915143 | orchestrator | Thursday 16 April 2026 05:37:11 +0000 (0:00:00.647) 0:01:39.137 ******** 2026-04-16 05:37:15.915148 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:15.915152 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:15.915156 | orchestrator | ok: [testbed-node-2] 2026-04-16 
05:37:15.915161 | orchestrator | 2026-04-16 05:37:15.915177 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-16 05:37:15.915182 | orchestrator | Thursday 16 April 2026 05:37:12 +0000 (0:00:00.752) 0:01:39.890 ******** 2026-04-16 05:37:15.915203 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:37:15.915208 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:37:15.915213 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:15.915217 | orchestrator | 2026-04-16 05:37:15.915221 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-16 05:37:15.915226 | orchestrator | Thursday 16 April 2026 05:37:12 +0000 (0:00:00.604) 0:01:40.495 ******** 2026-04-16 05:37:15.915230 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:15.915234 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:15.915249 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:15.915254 | orchestrator | 2026-04-16 05:37:15.915259 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-16 05:37:15.915263 | orchestrator | Thursday 16 April 2026 05:37:13 +0000 (0:00:00.689) 0:01:41.184 ******** 2026-04-16 05:37:15.915267 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:15.915272 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:15.915276 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:15.915281 | orchestrator | 2026-04-16 05:37:15.915285 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-04-16 05:37:15.915290 | orchestrator | Thursday 16 April 2026 05:37:14 +0000 (0:00:00.952) 0:01:42.138 ******** 2026-04-16 05:37:15.915294 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:15.915299 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:15.915303 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:15.915307 | orchestrator | 2026-04-16 
05:37:15.915311 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-16 05:37:15.915316 | orchestrator | Thursday 16 April 2026 05:37:14 +0000 (0:00:00.267) 0:01:42.405 ******** 2026-04-16 05:37:15.915322 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915329 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915333 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915338 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915349 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915356 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915364 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915375 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:15.915388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699032 | orchestrator | 2026-04-16 05:37:22.699140 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-04-16 05:37:22.699158 | orchestrator | Thursday 16 April 2026 05:37:15 +0000 (0:00:01.357) 0:01:43.762 ******** 2026-04-16 05:37:22.699172 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699198 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699271 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-04-16 05:37:22.699306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699318 | orchestrator | 2026-04-16 05:37:22.699329 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-04-16 05:37:22.699340 | orchestrator | Thursday 16 April 2026 05:37:19 +0000 (0:00:03.626) 0:01:47.389 ******** 2026-04-16 05:37:22.699370 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699383 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699394 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 
05:37:22.699405 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699457 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 05:37:22.699560 | orchestrator | 2026-04-16 05:37:22.699571 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:22.699582 | orchestrator | Thursday 16 April 2026 05:37:22 +0000 (0:00:02.955) 0:01:50.345 ******** 2026-04-16 05:37:22.699593 | orchestrator | 2026-04-16 05:37:22.699605 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:22.699615 | orchestrator | Thursday 16 April 2026 05:37:22 +0000 (0:00:00.062) 0:01:50.407 ******** 2026-04-16 05:37:22.699626 | orchestrator | 2026-04-16 05:37:22.699636 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-04-16 05:37:22.699647 | orchestrator | Thursday 16 April 2026 05:37:22 +0000 (0:00:00.062) 0:01:50.469 ******** 2026-04-16 05:37:22.699658 | orchestrator | 2026-04-16 05:37:22.699678 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-04-16 05:37:46.399570 | orchestrator | Thursday 16 April 2026 05:37:22 +0000 (0:00:00.076) 0:01:50.545 ******** 2026-04-16 05:37:46.399659 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:37:46.399670 | orchestrator | changed: 
[testbed-node-2] 2026-04-16 05:37:46.399677 | orchestrator | 2026-04-16 05:37:46.399684 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-04-16 05:37:46.399692 | orchestrator | Thursday 16 April 2026 05:37:28 +0000 (0:00:06.166) 0:01:56.712 ******** 2026-04-16 05:37:46.399699 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:37:46.399705 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:37:46.399712 | orchestrator | 2026-04-16 05:37:46.399719 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-04-16 05:37:46.399745 | orchestrator | Thursday 16 April 2026 05:37:34 +0000 (0:00:06.135) 0:02:02.848 ******** 2026-04-16 05:37:46.399752 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:37:46.399759 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:37:46.399765 | orchestrator | 2026-04-16 05:37:46.399772 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-04-16 05:37:46.399779 | orchestrator | Thursday 16 April 2026 05:37:41 +0000 (0:00:06.149) 0:02:08.997 ******** 2026-04-16 05:37:46.399786 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:37:46.399792 | orchestrator | 2026-04-16 05:37:46.399799 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-04-16 05:37:46.399806 | orchestrator | Thursday 16 April 2026 05:37:41 +0000 (0:00:00.119) 0:02:09.117 ******** 2026-04-16 05:37:46.399813 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:46.399820 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:46.399827 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:46.399833 | orchestrator | 2026-04-16 05:37:46.399840 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-16 05:37:46.399847 | orchestrator | Thursday 16 April 2026 05:37:42 +0000 (0:00:01.005) 0:02:10.123 ******** 
2026-04-16 05:37:46.399854 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:37:46.399860 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:37:46.399867 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:46.399873 | orchestrator | 2026-04-16 05:37:46.399880 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-16 05:37:46.399886 | orchestrator | Thursday 16 April 2026 05:37:42 +0000 (0:00:00.668) 0:02:10.791 ******** 2026-04-16 05:37:46.399894 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:46.399900 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:46.399907 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:46.399913 | orchestrator | 2026-04-16 05:37:46.399920 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-16 05:37:46.399927 | orchestrator | Thursday 16 April 2026 05:37:43 +0000 (0:00:00.743) 0:02:11.535 ******** 2026-04-16 05:37:46.399933 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:37:46.399940 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:37:46.399946 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:37:46.399953 | orchestrator | 2026-04-16 05:37:46.399960 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-04-16 05:37:46.399966 | orchestrator | Thursday 16 April 2026 05:37:44 +0000 (0:00:00.617) 0:02:12.153 ******** 2026-04-16 05:37:46.399973 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:37:46.399980 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:37:46.399986 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:37:46.399993 | orchestrator | 2026-04-16 05:37:46.399999 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-16 05:37:46.400006 | orchestrator | Thursday 16 April 2026 05:37:45 +0000 (0:00:00.919) 0:02:13.073 ******** 2026-04-16 05:37:46.400013 | orchestrator 
| ok: [testbed-node-0]
2026-04-16 05:37:46.400019 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:37:46.400026 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:37:46.400032 | orchestrator |
2026-04-16 05:37:46.400038 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:37:46.400046 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-16 05:37:46.400054 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-16 05:37:46.400061 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-04-16 05:37:46.400070 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:37:46.400083 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:37:46.400091 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:37:46.400098 | orchestrator |
2026-04-16 05:37:46.400106 | orchestrator |
2026-04-16 05:37:46.400124 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:37:46.400133 | orchestrator | Thursday 16 April 2026 05:37:46 +0000 (0:00:00.844) 0:02:13.917 ********
2026-04-16 05:37:46.400140 | orchestrator | ===============================================================================
2026-04-16 05:37:46.400148 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.88s
2026-04-16 05:37:46.400156 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.67s
2026-04-16 05:37:46.400163 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.65s
2026-04-16 05:37:46.400171 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.54s
2026-04-16 05:37:46.400179 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.53s
2026-04-16 05:37:46.400199 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.64s
2026-04-16 05:37:46.400208 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.63s
2026-04-16 05:37:46.400215 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.96s
2026-04-16 05:37:46.400223 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.47s
2026-04-16 05:37:46.400230 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.97s
2026-04-16 05:37:46.400238 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.54s
2026-04-16 05:37:46.400246 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.54s
2026-04-16 05:37:46.400253 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.43s
2026-04-16 05:37:46.400261 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.37s
2026-04-16 05:37:46.400269 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.36s
2026-04-16 05:37:46.400276 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.25s
2026-04-16 05:37:46.400284 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.12s
2026-04-16 05:37:46.400292 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.06s
2026-04-16 05:37:46.400299 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.01s
2026-04-16 05:37:46.400307 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.01s
2026-04-16 05:37:46.669011 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-16 05:37:46.669112 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-04-16 05:37:48.752061 | orchestrator | 2026-04-16 05:37:48 | INFO  | Trying to run play wipe-partitions in environment custom
2026-04-16 05:37:58.871730 | orchestrator | 2026-04-16 05:37:58 | INFO  | Task cd0145b7-b1e1-41ae-9818-e84f982527ac (wipe-partitions) was prepared for execution.
2026-04-16 05:37:58.871870 | orchestrator | 2026-04-16 05:37:58 | INFO  | It takes a moment until task cd0145b7-b1e1-41ae-9818-e84f982527ac (wipe-partitions) has been started and output is visible here.
2026-04-16 05:38:10.686124 | orchestrator |
2026-04-16 05:38:10.686286 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-04-16 05:38:10.686308 | orchestrator |
2026-04-16 05:38:10.686321 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-04-16 05:38:10.686333 | orchestrator | Thursday 16 April 2026 05:38:02 +0000 (0:00:00.096) 0:00:00.096 ********
2026-04-16 05:38:10.686371 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:38:10.686384 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:38:10.686449 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:38:10.686462 | orchestrator |
2026-04-16 05:38:10.686474 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-04-16 05:38:10.686485 | orchestrator | Thursday 16 April 2026 05:38:03 +0000 (0:00:00.548) 0:00:00.645 ********
2026-04-16 05:38:10.686495 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:38:10.686507 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:38:10.686517 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:38:10.686528 | orchestrator |
2026-04-16
05:38:10.686539 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-04-16 05:38:10.686550 | orchestrator | Thursday 16 April 2026 05:38:03 +0000 (0:00:00.302) 0:00:00.947 ******** 2026-04-16 05:38:10.686563 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:38:10.686577 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:10.686589 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:38:10.686601 | orchestrator | 2026-04-16 05:38:10.686614 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-04-16 05:38:10.686626 | orchestrator | Thursday 16 April 2026 05:38:04 +0000 (0:00:00.547) 0:00:01.495 ******** 2026-04-16 05:38:10.686638 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:10.686651 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:38:10.686665 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:38:10.686678 | orchestrator | 2026-04-16 05:38:10.686691 | orchestrator | TASK [Check device availability] *********************************************** 2026-04-16 05:38:10.686703 | orchestrator | Thursday 16 April 2026 05:38:04 +0000 (0:00:00.217) 0:00:01.712 ******** 2026-04-16 05:38:10.686716 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-16 05:38:10.686728 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-16 05:38:10.686741 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-16 05:38:10.686753 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-16 05:38:10.686765 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-16 05:38:10.686777 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-16 05:38:10.686789 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-16 05:38:10.686816 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-04-16 05:38:10.686828 | orchestrator | changed: [testbed-node-5] => 
(item=/dev/sdd) 2026-04-16 05:38:10.686841 | orchestrator | 2026-04-16 05:38:10.686854 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-04-16 05:38:10.686867 | orchestrator | Thursday 16 April 2026 05:38:05 +0000 (0:00:01.178) 0:00:02.890 ******** 2026-04-16 05:38:10.686880 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-04-16 05:38:10.686892 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-04-16 05:38:10.686904 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-04-16 05:38:10.686917 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-04-16 05:38:10.686928 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-04-16 05:38:10.686939 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-04-16 05:38:10.686949 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-04-16 05:38:10.686960 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-04-16 05:38:10.686972 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-04-16 05:38:10.686991 | orchestrator | 2026-04-16 05:38:10.687020 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-04-16 05:38:10.687039 | orchestrator | Thursday 16 April 2026 05:38:07 +0000 (0:00:01.469) 0:00:04.359 ******** 2026-04-16 05:38:10.687057 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-04-16 05:38:10.687075 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-04-16 05:38:10.687094 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-04-16 05:38:10.687108 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-04-16 05:38:10.687149 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-04-16 05:38:10.687172 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-04-16 05:38:10.687190 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-04-16 05:38:10.687208 | 
orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-04-16 05:38:10.687228 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-04-16 05:38:10.687246 | orchestrator |
2026-04-16 05:38:10.687264 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-04-16 05:38:10.687278 | orchestrator | Thursday 16 April 2026 05:38:09 +0000 (0:00:02.095) 0:00:06.455 ********
2026-04-16 05:38:10.687289 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:38:10.687300 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:38:10.687310 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:38:10.687321 | orchestrator |
2026-04-16 05:38:10.687332 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-04-16 05:38:10.687343 | orchestrator | Thursday 16 April 2026 05:38:09 +0000 (0:00:00.610) 0:00:07.065 ********
2026-04-16 05:38:10.687354 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:38:10.687364 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:38:10.687375 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:38:10.687386 | orchestrator |
2026-04-16 05:38:10.687424 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:38:10.687437 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:38:10.687450 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:38:10.687483 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:38:10.687495 | orchestrator |
2026-04-16 05:38:10.687506 | orchestrator |
2026-04-16 05:38:10.687517 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:38:10.687528 | orchestrator | Thursday 16 April 2026 05:38:10 +0000 (0:00:00.624) 0:00:07.690 ********
2026-04-16 05:38:10.687539 | orchestrator | ===============================================================================
2026-04-16 05:38:10.687550 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.10s
2026-04-16 05:38:10.687560 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.47s
2026-04-16 05:38:10.687574 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2026-04-16 05:38:10.687593 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2026-04-16 05:38:10.687611 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-04-16 05:38:10.687628 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s
2026-04-16 05:38:10.687644 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s
2026-04-16 05:38:10.687662 | orchestrator | Remove all rook related logical devices --------------------------------- 0.30s
2026-04-16 05:38:10.687680 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s
2026-04-16 05:38:23.011895 | orchestrator | 2026-04-16 05:38:23 | INFO  | Task b5826988-80b0-48ad-af54-0ef85892f8d9 (facts) was prepared for execution.
2026-04-16 05:38:23.012028 | orchestrator | 2026-04-16 05:38:23 | INFO  | It takes a moment until task b5826988-80b0-48ad-af54-0ef85892f8d9 (facts) has been started and output is visible here.
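The wipe-partitions play above runs the same sequence on each of /dev/sdb, /dev/sdc, and /dev/sdd: wipefs, zero the first 32M, then refresh udev. A minimal sketch of that sequence, with commands collected as argv lists rather than executed so it is safe to run; the log only shows the task names and the 32M size, so the exact dd invocation is an assumption:

```python
# Sketch of the per-disk wipe sequence from the play above. Commands are
# returned as argv lists instead of executed, so the sketch is harmless;
# dd arguments beyond bs/count are assumptions, not taken from the play.
def wipe_commands(devices=("/dev/sdb", "/dev/sdc", "/dev/sdd")):
    cmds = []
    for dev in devices:
        # "Wipe partitions with wipefs": drop filesystem/RAID/LVM signatures
        cmds.append(["wipefs", "-a", dev])
        # "Overwrite first 32M with zeros": clear leftover metadata at disk start
        cmds.append(["dd", "if=/dev/zero", f"of={dev}", "bs=1M", "count=32"])
    # "Reload udev rules" and "Request device events from the kernel"
    cmds.append(["udevadm", "control", "--reload-rules"])
    cmds.append(["udevadm", "trigger"])
    return cmds
```

On a real node each argv list could be handed to `subprocess.run()`; in the play the equivalent steps run via Ansible on testbed-node-3 through testbed-node-5.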
2026-04-16 05:38:35.772198 | orchestrator | 2026-04-16 05:38:35.772319 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-16 05:38:35.772337 | orchestrator | 2026-04-16 05:38:35.772349 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-16 05:38:35.772414 | orchestrator | Thursday 16 April 2026 05:38:27 +0000 (0:00:00.257) 0:00:00.257 ******** 2026-04-16 05:38:35.772452 | orchestrator | ok: [testbed-manager] 2026-04-16 05:38:35.772466 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:38:35.772477 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:38:35.772488 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:38:35.772499 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:35.772510 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:38:35.772520 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:38:35.772531 | orchestrator | 2026-04-16 05:38:35.772542 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-16 05:38:35.772554 | orchestrator | Thursday 16 April 2026 05:38:28 +0000 (0:00:01.006) 0:00:01.264 ******** 2026-04-16 05:38:35.772565 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:38:35.772578 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:38:35.772589 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:38:35.772599 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:38:35.772610 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:35.772621 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:38:35.772632 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:38:35.772643 | orchestrator | 2026-04-16 05:38:35.772654 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-16 05:38:35.772665 | orchestrator | 2026-04-16 05:38:35.772676 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-04-16 05:38:35.772687 | orchestrator | Thursday 16 April 2026 05:38:29 +0000 (0:00:01.167) 0:00:02.432 ******** 2026-04-16 05:38:35.772698 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:38:35.772709 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:38:35.772720 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:38:35.772733 | orchestrator | ok: [testbed-manager] 2026-04-16 05:38:35.772745 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:35.772758 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:38:35.772770 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:38:35.772782 | orchestrator | 2026-04-16 05:38:35.772796 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-16 05:38:35.772808 | orchestrator | 2026-04-16 05:38:35.772821 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-16 05:38:35.772833 | orchestrator | Thursday 16 April 2026 05:38:34 +0000 (0:00:05.144) 0:00:07.576 ******** 2026-04-16 05:38:35.772845 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:38:35.772857 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:38:35.772869 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:38:35.772882 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:38:35.772894 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:35.772906 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:38:35.772918 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:38:35.772931 | orchestrator | 2026-04-16 05:38:35.772951 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:38:35.772971 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773038 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-04-16 05:38:35.773062 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773084 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773104 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773116 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773137 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:38:35.773148 | orchestrator | 2026-04-16 05:38:35.773159 | orchestrator | 2026-04-16 05:38:35.773170 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:38:35.773181 | orchestrator | Thursday 16 April 2026 05:38:35 +0000 (0:00:00.545) 0:00:08.122 ******** 2026-04-16 05:38:35.773192 | orchestrator | =============================================================================== 2026-04-16 05:38:35.773203 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.14s 2026-04-16 05:38:35.773214 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.17s 2026-04-16 05:38:35.773224 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s 2026-04-16 05:38:35.773235 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-04-16 05:38:38.159103 | orchestrator | 2026-04-16 05:38:38 | INFO  | Task 5501b05b-ec6a-4be0-bd13-d5a736a8e535 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-04-16 05:38:38.159198 | orchestrator | 2026-04-16 05:38:38 | INFO  | It takes a moment until task 5501b05b-ec6a-4be0-bd13-d5a736a8e535 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-04-16 05:38:49.437745 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-16 05:38:49.437837 | orchestrator | 2.16.14 2026-04-16 05:38:49.437848 | orchestrator | 2026-04-16 05:38:49.437856 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-16 05:38:49.437863 | orchestrator | 2026-04-16 05:38:49.437870 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-16 05:38:49.437877 | orchestrator | Thursday 16 April 2026 05:38:42 +0000 (0:00:00.316) 0:00:00.316 ******** 2026-04-16 05:38:49.437884 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 05:38:49.437890 | orchestrator | 2026-04-16 05:38:49.437909 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-16 05:38:49.437916 | orchestrator | Thursday 16 April 2026 05:38:42 +0000 (0:00:00.244) 0:00:00.560 ******** 2026-04-16 05:38:49.437922 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:49.437928 | orchestrator | 2026-04-16 05:38:49.437934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.437940 | orchestrator | Thursday 16 April 2026 05:38:42 +0000 (0:00:00.218) 0:00:00.779 ******** 2026-04-16 05:38:49.437947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-04-16 05:38:49.437953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-04-16 05:38:49.437959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-04-16 05:38:49.437965 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-04-16 05:38:49.437971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-04-16 05:38:49.437977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-04-16 05:38:49.437983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-04-16 05:38:49.437989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-04-16 05:38:49.437995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-04-16 05:38:49.438001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-04-16 05:38:49.438007 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-04-16 05:38:49.438061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-04-16 05:38:49.438084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-04-16 05:38:49.438090 | orchestrator | 2026-04-16 05:38:49.438096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438102 | orchestrator | Thursday 16 April 2026 05:38:43 +0000 (0:00:00.456) 0:00:01.235 ******** 2026-04-16 05:38:49.438109 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438115 | orchestrator | 2026-04-16 05:38:49.438121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438127 | orchestrator | Thursday 16 April 2026 05:38:43 +0000 (0:00:00.188) 0:00:01.424 ******** 2026-04-16 05:38:49.438133 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438140 | orchestrator | 2026-04-16 05:38:49.438146 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438152 | orchestrator | Thursday 16 April 2026 05:38:43 +0000 (0:00:00.188) 0:00:01.613 ******** 2026-04-16 05:38:49.438158 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438164 | orchestrator | 2026-04-16 05:38:49.438170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438176 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.190) 0:00:01.803 ******** 2026-04-16 05:38:49.438182 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438188 | orchestrator | 2026-04-16 05:38:49.438194 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438200 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.181) 0:00:01.985 ******** 2026-04-16 05:38:49.438206 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438212 | orchestrator | 2026-04-16 05:38:49.438218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438225 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.189) 0:00:02.174 ******** 2026-04-16 05:38:49.438230 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438237 | orchestrator | 2026-04-16 05:38:49.438243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438249 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.188) 0:00:02.363 ******** 2026-04-16 05:38:49.438255 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438261 | orchestrator | 2026-04-16 05:38:49.438267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438273 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.191) 0:00:02.554 ******** 
2026-04-16 05:38:49.438279 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438285 | orchestrator | 2026-04-16 05:38:49.438291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438298 | orchestrator | Thursday 16 April 2026 05:38:44 +0000 (0:00:00.191) 0:00:02.746 ******** 2026-04-16 05:38:49.438304 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64) 2026-04-16 05:38:49.438311 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64) 2026-04-16 05:38:49.438317 | orchestrator | 2026-04-16 05:38:49.438323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438359 | orchestrator | Thursday 16 April 2026 05:38:45 +0000 (0:00:00.387) 0:00:03.133 ******** 2026-04-16 05:38:49.438366 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d) 2026-04-16 05:38:49.438373 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d) 2026-04-16 05:38:49.438379 | orchestrator | 2026-04-16 05:38:49.438385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438391 | orchestrator | Thursday 16 April 2026 05:38:45 +0000 (0:00:00.583) 0:00:03.716 ******** 2026-04-16 05:38:49.438402 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834) 2026-04-16 05:38:49.438413 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834) 2026-04-16 05:38:49.438420 | orchestrator | 2026-04-16 05:38:49.438426 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438432 | orchestrator | Thursday 16 April 2026 05:38:46 
+0000 (0:00:00.595) 0:00:04.312 ******** 2026-04-16 05:38:49.438438 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb) 2026-04-16 05:38:49.438444 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb) 2026-04-16 05:38:49.438450 | orchestrator | 2026-04-16 05:38:49.438456 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:38:49.438462 | orchestrator | Thursday 16 April 2026 05:38:47 +0000 (0:00:00.764) 0:00:05.076 ******** 2026-04-16 05:38:49.438468 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-16 05:38:49.438474 | orchestrator | 2026-04-16 05:38:49.438480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438486 | orchestrator | Thursday 16 April 2026 05:38:47 +0000 (0:00:00.341) 0:00:05.418 ******** 2026-04-16 05:38:49.438493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-04-16 05:38:49.438499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-04-16 05:38:49.438505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-04-16 05:38:49.438511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-04-16 05:38:49.438517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-04-16 05:38:49.438523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-04-16 05:38:49.438529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-04-16 05:38:49.438535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-04-16 05:38:49.438541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-04-16 05:38:49.438547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-04-16 05:38:49.438553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-04-16 05:38:49.438558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-04-16 05:38:49.438564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-04-16 05:38:49.438570 | orchestrator | 2026-04-16 05:38:49.438577 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438582 | orchestrator | Thursday 16 April 2026 05:38:48 +0000 (0:00:00.397) 0:00:05.816 ******** 2026-04-16 05:38:49.438589 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438595 | orchestrator | 2026-04-16 05:38:49.438601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438607 | orchestrator | Thursday 16 April 2026 05:38:48 +0000 (0:00:00.231) 0:00:06.047 ******** 2026-04-16 05:38:49.438613 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438619 | orchestrator | 2026-04-16 05:38:49.438625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438631 | orchestrator | Thursday 16 April 2026 05:38:48 +0000 (0:00:00.205) 0:00:06.253 ******** 2026-04-16 05:38:49.438637 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438643 | orchestrator | 2026-04-16 05:38:49.438649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438655 | orchestrator | Thursday 16 April 2026 05:38:48 
+0000 (0:00:00.206) 0:00:06.459 ******** 2026-04-16 05:38:49.438661 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438671 | orchestrator | 2026-04-16 05:38:49.438678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438684 | orchestrator | Thursday 16 April 2026 05:38:48 +0000 (0:00:00.189) 0:00:06.649 ******** 2026-04-16 05:38:49.438690 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438696 | orchestrator | 2026-04-16 05:38:49.438702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438708 | orchestrator | Thursday 16 April 2026 05:38:49 +0000 (0:00:00.185) 0:00:06.835 ******** 2026-04-16 05:38:49.438714 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438720 | orchestrator | 2026-04-16 05:38:49.438726 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:49.438732 | orchestrator | Thursday 16 April 2026 05:38:49 +0000 (0:00:00.180) 0:00:07.015 ******** 2026-04-16 05:38:49.438738 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:49.438744 | orchestrator | 2026-04-16 05:38:49.438754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.702921 | orchestrator | Thursday 16 April 2026 05:38:49 +0000 (0:00:00.195) 0:00:07.211 ******** 2026-04-16 05:38:56.703030 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703048 | orchestrator | 2026-04-16 05:38:56.703062 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.703074 | orchestrator | Thursday 16 April 2026 05:38:49 +0000 (0:00:00.189) 0:00:07.401 ******** 2026-04-16 05:38:56.703086 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-04-16 05:38:56.703098 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-04-16 
05:38:56.703110 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-04-16 05:38:56.703136 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-04-16 05:38:56.703161 | orchestrator | 2026-04-16 05:38:56.703173 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.703185 | orchestrator | Thursday 16 April 2026 05:38:50 +0000 (0:00:00.982) 0:00:08.384 ******** 2026-04-16 05:38:56.703196 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703208 | orchestrator | 2026-04-16 05:38:56.703219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.703231 | orchestrator | Thursday 16 April 2026 05:38:50 +0000 (0:00:00.198) 0:00:08.583 ******** 2026-04-16 05:38:56.703242 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703254 | orchestrator | 2026-04-16 05:38:56.703265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.703277 | orchestrator | Thursday 16 April 2026 05:38:50 +0000 (0:00:00.191) 0:00:08.774 ******** 2026-04-16 05:38:56.703289 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703300 | orchestrator | 2026-04-16 05:38:56.703312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:38:56.703324 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.211) 0:00:08.986 ******** 2026-04-16 05:38:56.703358 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703369 | orchestrator | 2026-04-16 05:38:56.703380 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-16 05:38:56.703391 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.210) 0:00:09.196 ******** 2026-04-16 05:38:56.703402 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-04-16 05:38:56.703413 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-04-16 05:38:56.703424 | orchestrator | 2026-04-16 05:38:56.703435 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-16 05:38:56.703446 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.167) 0:00:09.363 ******** 2026-04-16 05:38:56.703460 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703473 | orchestrator | 2026-04-16 05:38:56.703485 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-16 05:38:56.703497 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.124) 0:00:09.488 ******** 2026-04-16 05:38:56.703533 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703546 | orchestrator | 2026-04-16 05:38:56.703560 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-16 05:38:56.703573 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.135) 0:00:09.623 ******** 2026-04-16 05:38:56.703585 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703597 | orchestrator | 2026-04-16 05:38:56.703608 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-16 05:38:56.703619 | orchestrator | Thursday 16 April 2026 05:38:51 +0000 (0:00:00.131) 0:00:09.755 ******** 2026-04-16 05:38:56.703630 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:56.703641 | orchestrator | 2026-04-16 05:38:56.703652 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-16 05:38:56.703663 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.128) 0:00:09.883 ******** 2026-04-16 05:38:56.703674 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}}) 2026-04-16 05:38:56.703686 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}}) 2026-04-16 05:38:56.703697 | orchestrator | 2026-04-16 05:38:56.703707 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-16 05:38:56.703718 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.157) 0:00:10.041 ******** 2026-04-16 05:38:56.703730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}})  2026-04-16 05:38:56.703744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}})  2026-04-16 05:38:56.703754 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703765 | orchestrator | 2026-04-16 05:38:56.703776 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-16 05:38:56.703787 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.298) 0:00:10.339 ******** 2026-04-16 05:38:56.703798 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}})  2026-04-16 05:38:56.703809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}})  2026-04-16 05:38:56.703819 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703830 | orchestrator | 2026-04-16 05:38:56.703841 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-16 05:38:56.703851 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.146) 0:00:10.485 ******** 2026-04-16 05:38:56.703862 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}})  2026-04-16 05:38:56.703891 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}})  2026-04-16 05:38:56.703903 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.703914 | orchestrator | 2026-04-16 05:38:56.703925 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-16 05:38:56.703936 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.154) 0:00:10.640 ******** 2026-04-16 05:38:56.703947 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:56.703957 | orchestrator | 2026-04-16 05:38:56.703968 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-16 05:38:56.703984 | orchestrator | Thursday 16 April 2026 05:38:52 +0000 (0:00:00.133) 0:00:10.774 ******** 2026-04-16 05:38:56.703995 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:38:56.704006 | orchestrator | 2026-04-16 05:38:56.704017 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-16 05:38:56.704027 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.137) 0:00:10.912 ******** 2026-04-16 05:38:56.704046 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704057 | orchestrator | 2026-04-16 05:38:56.704068 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-16 05:38:56.704079 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.131) 0:00:11.044 ******** 2026-04-16 05:38:56.704089 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704100 | orchestrator | 2026-04-16 05:38:56.704111 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-16 05:38:56.704122 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.126) 0:00:11.170 ******** 2026-04-16 05:38:56.704132 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704143 | orchestrator | 2026-04-16 
05:38:56.704154 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-16 05:38:56.704165 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.148) 0:00:11.318 ******** 2026-04-16 05:38:56.704176 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:38:56.704187 | orchestrator |  "ceph_osd_devices": { 2026-04-16 05:38:56.704198 | orchestrator |  "sdb": { 2026-04-16 05:38:56.704208 | orchestrator |  "osd_lvm_uuid": "c8cebb68-f409-516c-8b4d-2b5a47d5dab9" 2026-04-16 05:38:56.704219 | orchestrator |  }, 2026-04-16 05:38:56.704230 | orchestrator |  "sdc": { 2026-04-16 05:38:56.704241 | orchestrator |  "osd_lvm_uuid": "5d85d6a1-6c0d-5a96-8279-fc702a5664ab" 2026-04-16 05:38:56.704252 | orchestrator |  } 2026-04-16 05:38:56.704262 | orchestrator |  } 2026-04-16 05:38:56.704273 | orchestrator | } 2026-04-16 05:38:56.704285 | orchestrator | 2026-04-16 05:38:56.704296 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-16 05:38:56.704307 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.145) 0:00:11.464 ******** 2026-04-16 05:38:56.704317 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704358 | orchestrator | 2026-04-16 05:38:56.704370 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-16 05:38:56.704381 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.129) 0:00:11.594 ******** 2026-04-16 05:38:56.704392 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704403 | orchestrator | 2026-04-16 05:38:56.704414 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-16 05:38:56.704424 | orchestrator | Thursday 16 April 2026 05:38:53 +0000 (0:00:00.128) 0:00:11.722 ******** 2026-04-16 05:38:56.704435 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:38:56.704446 | orchestrator | 2026-04-16 
05:38:56.704457 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-16 05:38:56.704468 | orchestrator | Thursday 16 April 2026 05:38:54 +0000 (0:00:00.144) 0:00:11.867 ******** 2026-04-16 05:38:56.704479 | orchestrator | changed: [testbed-node-3] => { 2026-04-16 05:38:56.704489 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-16 05:38:56.704500 | orchestrator |  "ceph_osd_devices": { 2026-04-16 05:38:56.704511 | orchestrator |  "sdb": { 2026-04-16 05:38:56.704522 | orchestrator |  "osd_lvm_uuid": "c8cebb68-f409-516c-8b4d-2b5a47d5dab9" 2026-04-16 05:38:56.704533 | orchestrator |  }, 2026-04-16 05:38:56.704544 | orchestrator |  "sdc": { 2026-04-16 05:38:56.704555 | orchestrator |  "osd_lvm_uuid": "5d85d6a1-6c0d-5a96-8279-fc702a5664ab" 2026-04-16 05:38:56.704565 | orchestrator |  } 2026-04-16 05:38:56.704576 | orchestrator |  }, 2026-04-16 05:38:56.704587 | orchestrator |  "lvm_volumes": [ 2026-04-16 05:38:56.704598 | orchestrator |  { 2026-04-16 05:38:56.704609 | orchestrator |  "data": "osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9", 2026-04-16 05:38:56.704620 | orchestrator |  "data_vg": "ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9" 2026-04-16 05:38:56.704630 | orchestrator |  }, 2026-04-16 05:38:56.704641 | orchestrator |  { 2026-04-16 05:38:56.704652 | orchestrator |  "data": "osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab", 2026-04-16 05:38:56.704670 | orchestrator |  "data_vg": "ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab" 2026-04-16 05:38:56.704681 | orchestrator |  } 2026-04-16 05:38:56.704692 | orchestrator |  ] 2026-04-16 05:38:56.704703 | orchestrator |  } 2026-04-16 05:38:56.704713 | orchestrator | } 2026-04-16 05:38:56.704724 | orchestrator | 2026-04-16 05:38:56.704735 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-16 05:38:56.704746 | orchestrator | Thursday 16 April 2026 05:38:54 +0000 (0:00:00.378) 0:00:12.246 ******** 2026-04-16 
05:38:56.704757 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 05:38:56.704767 | orchestrator | 2026-04-16 05:38:56.704778 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-04-16 05:38:56.704789 | orchestrator | 2026-04-16 05:38:56.704800 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-16 05:38:56.704811 | orchestrator | Thursday 16 April 2026 05:38:56 +0000 (0:00:01.748) 0:00:13.994 ******** 2026-04-16 05:38:56.704821 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-16 05:38:56.704835 | orchestrator | 2026-04-16 05:38:56.704853 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-16 05:38:56.704871 | orchestrator | Thursday 16 April 2026 05:38:56 +0000 (0:00:00.249) 0:00:14.244 ******** 2026-04-16 05:38:56.704890 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:38:56.704907 | orchestrator | 2026-04-16 05:38:56.704934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.302596 | orchestrator | Thursday 16 April 2026 05:38:56 +0000 (0:00:00.239) 0:00:14.483 ******** 2026-04-16 05:39:05.302705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-16 05:39:05.302721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-16 05:39:05.302733 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-16 05:39:05.302760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-16 05:39:05.302772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-16 05:39:05.302783 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-16 05:39:05.302794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-16 05:39:05.302805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-16 05:39:05.302816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-16 05:39:05.302827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-16 05:39:05.302838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-16 05:39:05.302861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-16 05:39:05.302873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-16 05:39:05.302884 | orchestrator | 2026-04-16 05:39:05.302895 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.302906 | orchestrator | Thursday 16 April 2026 05:38:57 +0000 (0:00:00.362) 0:00:14.846 ******** 2026-04-16 05:39:05.302917 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.302929 | orchestrator | 2026-04-16 05:39:05.302940 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.302951 | orchestrator | Thursday 16 April 2026 05:38:57 +0000 (0:00:00.194) 0:00:15.041 ******** 2026-04-16 05:39:05.302962 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.302973 | orchestrator | 2026-04-16 05:39:05.302984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.302995 | orchestrator | Thursday 16 April 2026 05:38:57 +0000 (0:00:00.193) 0:00:15.234 ******** 2026-04-16 05:39:05.303027 | orchestrator | skipping: 
[testbed-node-4] 2026-04-16 05:39:05.303039 | orchestrator | 2026-04-16 05:39:05.303050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303060 | orchestrator | Thursday 16 April 2026 05:38:57 +0000 (0:00:00.207) 0:00:15.442 ******** 2026-04-16 05:39:05.303071 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303082 | orchestrator | 2026-04-16 05:39:05.303092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303103 | orchestrator | Thursday 16 April 2026 05:38:58 +0000 (0:00:00.595) 0:00:16.037 ******** 2026-04-16 05:39:05.303114 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303125 | orchestrator | 2026-04-16 05:39:05.303146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303158 | orchestrator | Thursday 16 April 2026 05:38:58 +0000 (0:00:00.206) 0:00:16.243 ******** 2026-04-16 05:39:05.303168 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303179 | orchestrator | 2026-04-16 05:39:05.303189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303200 | orchestrator | Thursday 16 April 2026 05:38:58 +0000 (0:00:00.201) 0:00:16.444 ******** 2026-04-16 05:39:05.303211 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303221 | orchestrator | 2026-04-16 05:39:05.303232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303243 | orchestrator | Thursday 16 April 2026 05:38:58 +0000 (0:00:00.191) 0:00:16.636 ******** 2026-04-16 05:39:05.303254 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303264 | orchestrator | 2026-04-16 05:39:05.303275 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303286 | 
orchestrator | Thursday 16 April 2026 05:38:59 +0000 (0:00:00.195) 0:00:16.831 ******** 2026-04-16 05:39:05.303297 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8) 2026-04-16 05:39:05.303308 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8) 2026-04-16 05:39:05.303341 | orchestrator | 2026-04-16 05:39:05.303353 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303364 | orchestrator | Thursday 16 April 2026 05:38:59 +0000 (0:00:00.418) 0:00:17.250 ******** 2026-04-16 05:39:05.303374 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13) 2026-04-16 05:39:05.303385 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13) 2026-04-16 05:39:05.303396 | orchestrator | 2026-04-16 05:39:05.303406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303417 | orchestrator | Thursday 16 April 2026 05:38:59 +0000 (0:00:00.407) 0:00:17.657 ******** 2026-04-16 05:39:05.303428 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3) 2026-04-16 05:39:05.303438 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3) 2026-04-16 05:39:05.303449 | orchestrator | 2026-04-16 05:39:05.303460 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303488 | orchestrator | Thursday 16 April 2026 05:39:00 +0000 (0:00:00.401) 0:00:18.058 ******** 2026-04-16 05:39:05.303500 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99) 2026-04-16 05:39:05.303511 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99) 2026-04-16 05:39:05.303522 | orchestrator | 2026-04-16 05:39:05.303533 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:05.303549 | orchestrator | Thursday 16 April 2026 05:39:00 +0000 (0:00:00.652) 0:00:18.712 ******** 2026-04-16 05:39:05.303560 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-16 05:39:05.303579 | orchestrator | 2026-04-16 05:39:05.303590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303601 | orchestrator | Thursday 16 April 2026 05:39:01 +0000 (0:00:00.514) 0:00:19.226 ******** 2026-04-16 05:39:05.303611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-04-16 05:39:05.303622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-04-16 05:39:05.303633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-04-16 05:39:05.303643 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-04-16 05:39:05.303654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-04-16 05:39:05.303664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-04-16 05:39:05.303675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-04-16 05:39:05.303686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-04-16 05:39:05.303696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-04-16 05:39:05.303707 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-04-16 05:39:05.303718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-04-16 05:39:05.303729 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-04-16 05:39:05.303740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-04-16 05:39:05.303751 | orchestrator | 2026-04-16 05:39:05.303761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303772 | orchestrator | Thursday 16 April 2026 05:39:02 +0000 (0:00:00.757) 0:00:19.983 ******** 2026-04-16 05:39:05.303783 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303794 | orchestrator | 2026-04-16 05:39:05.303804 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303815 | orchestrator | Thursday 16 April 2026 05:39:02 +0000 (0:00:00.194) 0:00:20.178 ******** 2026-04-16 05:39:05.303826 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303837 | orchestrator | 2026-04-16 05:39:05.303847 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303858 | orchestrator | Thursday 16 April 2026 05:39:02 +0000 (0:00:00.196) 0:00:20.375 ******** 2026-04-16 05:39:05.303869 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303880 | orchestrator | 2026-04-16 05:39:05.303890 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303901 | orchestrator | Thursday 16 April 2026 05:39:02 +0000 (0:00:00.203) 0:00:20.579 ******** 2026-04-16 05:39:05.303912 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303923 | orchestrator | 2026-04-16 05:39:05.303933 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303944 | orchestrator | Thursday 16 April 2026 05:39:03 +0000 (0:00:00.223) 0:00:20.802 ******** 2026-04-16 05:39:05.303955 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.303966 | orchestrator | 2026-04-16 05:39:05.303977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.303988 | orchestrator | Thursday 16 April 2026 05:39:03 +0000 (0:00:00.207) 0:00:21.010 ******** 2026-04-16 05:39:05.303998 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.304009 | orchestrator | 2026-04-16 05:39:05.304020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.304031 | orchestrator | Thursday 16 April 2026 05:39:03 +0000 (0:00:00.206) 0:00:21.217 ******** 2026-04-16 05:39:05.304042 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.304059 | orchestrator | 2026-04-16 05:39:05.304070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.304081 | orchestrator | Thursday 16 April 2026 05:39:03 +0000 (0:00:00.214) 0:00:21.432 ******** 2026-04-16 05:39:05.304092 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:05.304102 | orchestrator | 2026-04-16 05:39:05.304113 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.304124 | orchestrator | Thursday 16 April 2026 05:39:03 +0000 (0:00:00.199) 0:00:21.631 ******** 2026-04-16 05:39:05.304134 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-04-16 05:39:05.304146 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-04-16 05:39:05.304157 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-04-16 05:39:05.304167 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-04-16 05:39:05.304178 | orchestrator | 2026-04-16 
05:39:05.304189 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:05.304200 | orchestrator | Thursday 16 April 2026 05:39:04 +0000 (0:00:00.832) 0:00:22.464 ******** 2026-04-16 05:39:05.304211 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.176761 | orchestrator | 2026-04-16 05:39:11.176860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:11.176876 | orchestrator | Thursday 16 April 2026 05:39:05 +0000 (0:00:00.620) 0:00:23.084 ******** 2026-04-16 05:39:11.176883 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.176891 | orchestrator | 2026-04-16 05:39:11.176898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:11.176906 | orchestrator | Thursday 16 April 2026 05:39:05 +0000 (0:00:00.208) 0:00:23.293 ******** 2026-04-16 05:39:11.176928 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.176936 | orchestrator | 2026-04-16 05:39:11.176942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:11.176949 | orchestrator | Thursday 16 April 2026 05:39:05 +0000 (0:00:00.210) 0:00:23.503 ******** 2026-04-16 05:39:11.176955 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.176962 | orchestrator | 2026-04-16 05:39:11.176969 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-16 05:39:11.176976 | orchestrator | Thursday 16 April 2026 05:39:05 +0000 (0:00:00.209) 0:00:23.712 ******** 2026-04-16 05:39:11.176983 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-04-16 05:39:11.176990 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-04-16 05:39:11.176997 | orchestrator | 2026-04-16 05:39:11.177004 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-04-16 05:39:11.177012 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.162) 0:00:23.875 ******** 2026-04-16 05:39:11.177020 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177027 | orchestrator | 2026-04-16 05:39:11.177034 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-16 05:39:11.177042 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.134) 0:00:24.009 ******** 2026-04-16 05:39:11.177049 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177057 | orchestrator | 2026-04-16 05:39:11.177065 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-16 05:39:11.177071 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.137) 0:00:24.147 ******** 2026-04-16 05:39:11.177076 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177080 | orchestrator | 2026-04-16 05:39:11.177084 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-16 05:39:11.177089 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.140) 0:00:24.288 ******** 2026-04-16 05:39:11.177093 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:39:11.177098 | orchestrator | 2026-04-16 05:39:11.177103 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-16 05:39:11.177107 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.139) 0:00:24.427 ******** 2026-04-16 05:39:11.177125 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}}) 2026-04-16 05:39:11.177130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '280a11fd-e83f-54f4-b253-754709c5cdf6'}}) 2026-04-16 05:39:11.177135 | orchestrator | 2026-04-16 05:39:11.177140 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-04-16 05:39:11.177144 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.178) 0:00:24.605 ******** 2026-04-16 05:39:11.177149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}})  2026-04-16 05:39:11.177155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '280a11fd-e83f-54f4-b253-754709c5cdf6'}})  2026-04-16 05:39:11.177160 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177164 | orchestrator | 2026-04-16 05:39:11.177168 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-16 05:39:11.177172 | orchestrator | Thursday 16 April 2026 05:39:06 +0000 (0:00:00.140) 0:00:24.746 ******** 2026-04-16 05:39:11.177177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}})  2026-04-16 05:39:11.177181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '280a11fd-e83f-54f4-b253-754709c5cdf6'}})  2026-04-16 05:39:11.177185 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177190 | orchestrator | 2026-04-16 05:39:11.177194 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-16 05:39:11.177198 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.327) 0:00:25.073 ******** 2026-04-16 05:39:11.177203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}})  2026-04-16 05:39:11.177207 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '280a11fd-e83f-54f4-b253-754709c5cdf6'}})  2026-04-16 05:39:11.177211 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177216 | 
orchestrator | 2026-04-16 05:39:11.177220 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-16 05:39:11.177224 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.148) 0:00:25.222 ******** 2026-04-16 05:39:11.177228 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:39:11.177233 | orchestrator | 2026-04-16 05:39:11.177237 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-16 05:39:11.177241 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.143) 0:00:25.366 ******** 2026-04-16 05:39:11.177245 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:39:11.177249 | orchestrator | 2026-04-16 05:39:11.177254 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-16 05:39:11.177258 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.141) 0:00:25.507 ******** 2026-04-16 05:39:11.177276 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177281 | orchestrator | 2026-04-16 05:39:11.177285 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-16 05:39:11.177289 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.130) 0:00:25.637 ******** 2026-04-16 05:39:11.177293 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177298 | orchestrator | 2026-04-16 05:39:11.177302 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-16 05:39:11.177307 | orchestrator | Thursday 16 April 2026 05:39:07 +0000 (0:00:00.148) 0:00:25.786 ******** 2026-04-16 05:39:11.177337 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177346 | orchestrator | 2026-04-16 05:39:11.177353 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-16 05:39:11.177359 | orchestrator | Thursday 16 April 2026 05:39:08 +0000 
(0:00:00.144) 0:00:25.930 ******** 2026-04-16 05:39:11.177371 | orchestrator | ok: [testbed-node-4] => { 2026-04-16 05:39:11.177379 | orchestrator |  "ceph_osd_devices": { 2026-04-16 05:39:11.177385 | orchestrator |  "sdb": { 2026-04-16 05:39:11.177393 | orchestrator |  "osd_lvm_uuid": "7b8b78e2-2212-5c47-abe3-ec23a1e6354f" 2026-04-16 05:39:11.177400 | orchestrator |  }, 2026-04-16 05:39:11.177407 | orchestrator |  "sdc": { 2026-04-16 05:39:11.177414 | orchestrator |  "osd_lvm_uuid": "280a11fd-e83f-54f4-b253-754709c5cdf6" 2026-04-16 05:39:11.177420 | orchestrator |  } 2026-04-16 05:39:11.177427 | orchestrator |  } 2026-04-16 05:39:11.177434 | orchestrator | } 2026-04-16 05:39:11.177441 | orchestrator | 2026-04-16 05:39:11.177448 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-04-16 05:39:11.177454 | orchestrator | Thursday 16 April 2026 05:39:08 +0000 (0:00:00.157) 0:00:26.088 ******** 2026-04-16 05:39:11.177461 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177468 | orchestrator | 2026-04-16 05:39:11.177475 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-04-16 05:39:11.177481 | orchestrator | Thursday 16 April 2026 05:39:08 +0000 (0:00:00.126) 0:00:26.214 ******** 2026-04-16 05:39:11.177487 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177495 | orchestrator | 2026-04-16 05:39:11.177501 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-04-16 05:39:11.177508 | orchestrator | Thursday 16 April 2026 05:39:08 +0000 (0:00:00.139) 0:00:26.354 ******** 2026-04-16 05:39:11.177516 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:39:11.177522 | orchestrator | 2026-04-16 05:39:11.177530 | orchestrator | TASK [Print configuration data] ************************************************ 2026-04-16 05:39:11.177537 | orchestrator | Thursday 16 April 2026 05:39:08 +0000 
(0:00:00.142) 0:00:26.496 ******** 2026-04-16 05:39:11.177544 | orchestrator | changed: [testbed-node-4] => { 2026-04-16 05:39:11.177551 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-04-16 05:39:11.177557 | orchestrator |  "ceph_osd_devices": { 2026-04-16 05:39:11.177564 | orchestrator |  "sdb": { 2026-04-16 05:39:11.177571 | orchestrator |  "osd_lvm_uuid": "7b8b78e2-2212-5c47-abe3-ec23a1e6354f" 2026-04-16 05:39:11.177577 | orchestrator |  }, 2026-04-16 05:39:11.177584 | orchestrator |  "sdc": { 2026-04-16 05:39:11.177591 | orchestrator |  "osd_lvm_uuid": "280a11fd-e83f-54f4-b253-754709c5cdf6" 2026-04-16 05:39:11.177597 | orchestrator |  } 2026-04-16 05:39:11.177604 | orchestrator |  }, 2026-04-16 05:39:11.177611 | orchestrator |  "lvm_volumes": [ 2026-04-16 05:39:11.177618 | orchestrator |  { 2026-04-16 05:39:11.177625 | orchestrator |  "data": "osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f", 2026-04-16 05:39:11.177632 | orchestrator |  "data_vg": "ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f" 2026-04-16 05:39:11.177639 | orchestrator |  }, 2026-04-16 05:39:11.177647 | orchestrator |  { 2026-04-16 05:39:11.177654 | orchestrator |  "data": "osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6", 2026-04-16 05:39:11.177660 | orchestrator |  "data_vg": "ceph-280a11fd-e83f-54f4-b253-754709c5cdf6" 2026-04-16 05:39:11.177668 | orchestrator |  } 2026-04-16 05:39:11.177676 | orchestrator |  ] 2026-04-16 05:39:11.177683 | orchestrator |  } 2026-04-16 05:39:11.177691 | orchestrator | } 2026-04-16 05:39:11.177698 | orchestrator | 2026-04-16 05:39:11.177706 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-04-16 05:39:11.177711 | orchestrator | Thursday 16 April 2026 05:39:09 +0000 (0:00:00.462) 0:00:26.958 ******** 2026-04-16 05:39:11.177716 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-16 05:39:11.177722 | orchestrator | 2026-04-16 05:39:11.177726 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-04-16 05:39:11.177732 | orchestrator | 2026-04-16 05:39:11.177736 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-16 05:39:11.177741 | orchestrator | Thursday 16 April 2026 05:39:10 +0000 (0:00:01.109) 0:00:28.067 ******** 2026-04-16 05:39:11.177752 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-16 05:39:11.177756 | orchestrator | 2026-04-16 05:39:11.177760 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-16 05:39:11.177765 | orchestrator | Thursday 16 April 2026 05:39:10 +0000 (0:00:00.272) 0:00:28.340 ******** 2026-04-16 05:39:11.177769 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:39:11.177774 | orchestrator | 2026-04-16 05:39:11.177778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:11.177782 | orchestrator | Thursday 16 April 2026 05:39:10 +0000 (0:00:00.235) 0:00:28.576 ******** 2026-04-16 05:39:11.177787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-16 05:39:11.177791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-16 05:39:11.177796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-16 05:39:11.177800 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-16 05:39:11.177804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-16 05:39:11.177817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-16 05:39:19.378525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-16 05:39:19.378673 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-16 05:39:19.378702 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-16 05:39:19.378716 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-16 05:39:19.378759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-16 05:39:19.378771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-16 05:39:19.378782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-16 05:39:19.378793 | orchestrator | 2026-04-16 05:39:19.378805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.378817 | orchestrator | Thursday 16 April 2026 05:39:11 +0000 (0:00:00.378) 0:00:28.954 ******** 2026-04-16 05:39:19.378828 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.378840 | orchestrator | 2026-04-16 05:39:19.378852 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.378862 | orchestrator | Thursday 16 April 2026 05:39:11 +0000 (0:00:00.199) 0:00:29.154 ******** 2026-04-16 05:39:19.378873 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.378905 | orchestrator | 2026-04-16 05:39:19.378917 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.378930 | orchestrator | Thursday 16 April 2026 05:39:11 +0000 (0:00:00.195) 0:00:29.349 ******** 2026-04-16 05:39:19.378943 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.378955 | orchestrator | 2026-04-16 05:39:19.378969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.378982 | 
orchestrator | Thursday 16 April 2026 05:39:11 +0000 (0:00:00.191) 0:00:29.541 ******** 2026-04-16 05:39:19.378994 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379007 | orchestrator | 2026-04-16 05:39:19.379019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379032 | orchestrator | Thursday 16 April 2026 05:39:12 +0000 (0:00:00.607) 0:00:30.149 ******** 2026-04-16 05:39:19.379045 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379057 | orchestrator | 2026-04-16 05:39:19.379070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379082 | orchestrator | Thursday 16 April 2026 05:39:12 +0000 (0:00:00.208) 0:00:30.357 ******** 2026-04-16 05:39:19.379115 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379128 | orchestrator | 2026-04-16 05:39:19.379141 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379154 | orchestrator | Thursday 16 April 2026 05:39:12 +0000 (0:00:00.219) 0:00:30.576 ******** 2026-04-16 05:39:19.379166 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379179 | orchestrator | 2026-04-16 05:39:19.379191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379203 | orchestrator | Thursday 16 April 2026 05:39:13 +0000 (0:00:00.218) 0:00:30.795 ******** 2026-04-16 05:39:19.379215 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379226 | orchestrator | 2026-04-16 05:39:19.379236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379247 | orchestrator | Thursday 16 April 2026 05:39:13 +0000 (0:00:00.204) 0:00:31.000 ******** 2026-04-16 05:39:19.379258 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd) 2026-04-16 05:39:19.379270 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd) 2026-04-16 05:39:19.379281 | orchestrator | 2026-04-16 05:39:19.379291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379327 | orchestrator | Thursday 16 April 2026 05:39:13 +0000 (0:00:00.404) 0:00:31.405 ******** 2026-04-16 05:39:19.379339 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e) 2026-04-16 05:39:19.379350 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e) 2026-04-16 05:39:19.379361 | orchestrator | 2026-04-16 05:39:19.379371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379382 | orchestrator | Thursday 16 April 2026 05:39:14 +0000 (0:00:00.388) 0:00:31.793 ******** 2026-04-16 05:39:19.379392 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042) 2026-04-16 05:39:19.379403 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042) 2026-04-16 05:39:19.379414 | orchestrator | 2026-04-16 05:39:19.379424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:39:19.379435 | orchestrator | Thursday 16 April 2026 05:39:14 +0000 (0:00:00.415) 0:00:32.209 ******** 2026-04-16 05:39:19.379447 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3) 2026-04-16 05:39:19.379457 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3) 2026-04-16 05:39:19.379468 | orchestrator | 2026-04-16 05:39:19.379479 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-04-16 05:39:19.379489 | orchestrator | Thursday 16 April 2026 05:39:14 +0000 (0:00:00.408) 0:00:32.618 ******** 2026-04-16 05:39:19.379500 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-16 05:39:19.379511 | orchestrator | 2026-04-16 05:39:19.379522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379556 | orchestrator | Thursday 16 April 2026 05:39:15 +0000 (0:00:00.313) 0:00:32.932 ******** 2026-04-16 05:39:19.379568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-16 05:39:19.379578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-16 05:39:19.379589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-16 05:39:19.379606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-16 05:39:19.379617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-16 05:39:19.379628 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-16 05:39:19.379647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-16 05:39:19.379657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-16 05:39:19.379668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-16 05:39:19.379678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-16 05:39:19.379689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-04-16 05:39:19.379699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-16 05:39:19.379710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-16 05:39:19.379721 | orchestrator | 2026-04-16 05:39:19.379731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379742 | orchestrator | Thursday 16 April 2026 05:39:15 +0000 (0:00:00.535) 0:00:33.467 ******** 2026-04-16 05:39:19.379753 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379764 | orchestrator | 2026-04-16 05:39:19.379774 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379785 | orchestrator | Thursday 16 April 2026 05:39:15 +0000 (0:00:00.190) 0:00:33.658 ******** 2026-04-16 05:39:19.379796 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379807 | orchestrator | 2026-04-16 05:39:19.379817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379828 | orchestrator | Thursday 16 April 2026 05:39:16 +0000 (0:00:00.216) 0:00:33.874 ******** 2026-04-16 05:39:19.379839 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379849 | orchestrator | 2026-04-16 05:39:19.379860 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379871 | orchestrator | Thursday 16 April 2026 05:39:16 +0000 (0:00:00.191) 0:00:34.066 ******** 2026-04-16 05:39:19.379881 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379892 | orchestrator | 2026-04-16 05:39:19.379903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379914 | orchestrator | Thursday 16 April 2026 05:39:16 +0000 (0:00:00.197) 0:00:34.263 ******** 2026-04-16 05:39:19.379924 
| orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379935 | orchestrator | 2026-04-16 05:39:19.379946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379956 | orchestrator | Thursday 16 April 2026 05:39:16 +0000 (0:00:00.193) 0:00:34.456 ******** 2026-04-16 05:39:19.379967 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.379978 | orchestrator | 2026-04-16 05:39:19.379988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.379999 | orchestrator | Thursday 16 April 2026 05:39:16 +0000 (0:00:00.200) 0:00:34.657 ******** 2026-04-16 05:39:19.380010 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380021 | orchestrator | 2026-04-16 05:39:19.380031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380042 | orchestrator | Thursday 16 April 2026 05:39:17 +0000 (0:00:00.198) 0:00:34.855 ******** 2026-04-16 05:39:19.380053 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380064 | orchestrator | 2026-04-16 05:39:19.380074 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380085 | orchestrator | Thursday 16 April 2026 05:39:17 +0000 (0:00:00.193) 0:00:35.049 ******** 2026-04-16 05:39:19.380096 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-16 05:39:19.380106 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-04-16 05:39:19.380118 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-16 05:39:19.380128 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-16 05:39:19.380142 | orchestrator | 2026-04-16 05:39:19.380160 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380187 | orchestrator | Thursday 16 April 2026 05:39:18 +0000 (0:00:00.818) 
0:00:35.868 ******** 2026-04-16 05:39:19.380206 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380224 | orchestrator | 2026-04-16 05:39:19.380242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380257 | orchestrator | Thursday 16 April 2026 05:39:18 +0000 (0:00:00.205) 0:00:36.074 ******** 2026-04-16 05:39:19.380268 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380278 | orchestrator | 2026-04-16 05:39:19.380289 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380324 | orchestrator | Thursday 16 April 2026 05:39:18 +0000 (0:00:00.218) 0:00:36.292 ******** 2026-04-16 05:39:19.380345 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380357 | orchestrator | 2026-04-16 05:39:19.380368 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:39:19.380378 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.657) 0:00:36.950 ******** 2026-04-16 05:39:19.380389 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:19.380400 | orchestrator | 2026-04-16 05:39:19.380419 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-04-16 05:39:23.348478 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.208) 0:00:37.159 ******** 2026-04-16 05:39:23.348600 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-04-16 05:39:23.348633 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-04-16 05:39:23.348653 | orchestrator | 2026-04-16 05:39:23.348673 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-04-16 05:39:23.348713 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.178) 0:00:37.337 ******** 2026-04-16 05:39:23.348734 | orchestrator | skipping: 
[testbed-node-5] 2026-04-16 05:39:23.348753 | orchestrator | 2026-04-16 05:39:23.348774 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-04-16 05:39:23.348793 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.143) 0:00:37.481 ******** 2026-04-16 05:39:23.348812 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.348824 | orchestrator | 2026-04-16 05:39:23.348835 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-04-16 05:39:23.348846 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.123) 0:00:37.604 ******** 2026-04-16 05:39:23.348857 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.348867 | orchestrator | 2026-04-16 05:39:23.348878 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-04-16 05:39:23.348889 | orchestrator | Thursday 16 April 2026 05:39:19 +0000 (0:00:00.138) 0:00:37.742 ******** 2026-04-16 05:39:23.348900 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:39:23.348912 | orchestrator | 2026-04-16 05:39:23.348923 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-04-16 05:39:23.348933 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 (0:00:00.140) 0:00:37.883 ******** 2026-04-16 05:39:23.348945 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d9f1eac-7172-5024-9561-d385c629a6f5'}}) 2026-04-16 05:39:23.348957 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44db58af-23ca-547e-81cd-90c78ecf63d9'}}) 2026-04-16 05:39:23.348968 | orchestrator | 2026-04-16 05:39:23.348978 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-04-16 05:39:23.348989 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 (0:00:00.163) 0:00:38.047 ******** 2026-04-16 05:39:23.349001 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d9f1eac-7172-5024-9561-d385c629a6f5'}})  2026-04-16 05:39:23.349014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44db58af-23ca-547e-81cd-90c78ecf63d9'}})  2026-04-16 05:39:23.349025 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349067 | orchestrator | 2026-04-16 05:39:23.349087 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-04-16 05:39:23.349103 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 (0:00:00.159) 0:00:38.207 ******** 2026-04-16 05:39:23.349121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d9f1eac-7172-5024-9561-d385c629a6f5'}})  2026-04-16 05:39:23.349138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44db58af-23ca-547e-81cd-90c78ecf63d9'}})  2026-04-16 05:39:23.349157 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349175 | orchestrator | 2026-04-16 05:39:23.349192 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-04-16 05:39:23.349211 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 (0:00:00.155) 0:00:38.362 ******** 2026-04-16 05:39:23.349230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d9f1eac-7172-5024-9561-d385c629a6f5'}})  2026-04-16 05:39:23.349248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44db58af-23ca-547e-81cd-90c78ecf63d9'}})  2026-04-16 05:39:23.349263 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349275 | orchestrator | 2026-04-16 05:39:23.349286 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-04-16 05:39:23.349326 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 
(0:00:00.145) 0:00:38.508 ******** 2026-04-16 05:39:23.349339 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:39:23.349350 | orchestrator | 2026-04-16 05:39:23.349360 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-04-16 05:39:23.349371 | orchestrator | Thursday 16 April 2026 05:39:20 +0000 (0:00:00.119) 0:00:38.627 ******** 2026-04-16 05:39:23.349382 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:39:23.349393 | orchestrator | 2026-04-16 05:39:23.349403 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-04-16 05:39:23.349414 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.330) 0:00:38.958 ******** 2026-04-16 05:39:23.349425 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349436 | orchestrator | 2026-04-16 05:39:23.349446 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-04-16 05:39:23.349457 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.133) 0:00:39.091 ******** 2026-04-16 05:39:23.349469 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349479 | orchestrator | 2026-04-16 05:39:23.349490 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-04-16 05:39:23.349501 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.136) 0:00:39.228 ******** 2026-04-16 05:39:23.349512 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:39:23.349522 | orchestrator | 2026-04-16 05:39:23.349533 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-04-16 05:39:23.349544 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.129) 0:00:39.357 ******** 2026-04-16 05:39:23.349555 | orchestrator | ok: [testbed-node-5] => { 2026-04-16 05:39:23.349566 | orchestrator |  "ceph_osd_devices": { 2026-04-16 05:39:23.349577 | orchestrator |  "sdb": { 
2026-04-16 05:39:23.349609 | orchestrator |             "osd_lvm_uuid": "4d9f1eac-7172-5024-9561-d385c629a6f5"
2026-04-16 05:39:23.349621 | orchestrator |         },
2026-04-16 05:39:23.349632 | orchestrator |         "sdc": {
2026-04-16 05:39:23.349642 | orchestrator |             "osd_lvm_uuid": "44db58af-23ca-547e-81cd-90c78ecf63d9"
2026-04-16 05:39:23.349653 | orchestrator |         }
2026-04-16 05:39:23.349664 | orchestrator |     }
2026-04-16 05:39:23.349675 | orchestrator | }
2026-04-16 05:39:23.349686 | orchestrator |
2026-04-16 05:39:23.349697 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-04-16 05:39:23.349715 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.153) 0:00:39.511 ********
2026-04-16 05:39:23.349727 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:39:23.349747 | orchestrator |
2026-04-16 05:39:23.349759 | orchestrator | TASK [Print DB devices] ********************************************************
2026-04-16 05:39:23.349769 | orchestrator | Thursday 16 April 2026 05:39:21 +0000 (0:00:00.144) 0:00:39.656 ********
2026-04-16 05:39:23.349780 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:39:23.349791 | orchestrator |
2026-04-16 05:39:23.349802 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-04-16 05:39:23.349813 | orchestrator | Thursday 16 April 2026 05:39:22 +0000 (0:00:00.141) 0:00:39.798 ********
2026-04-16 05:39:23.349823 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:39:23.349834 | orchestrator |
2026-04-16 05:39:23.349845 | orchestrator | TASK [Print configuration data] ************************************************
2026-04-16 05:39:23.349856 | orchestrator | Thursday 16 April 2026 05:39:22 +0000 (0:00:00.138) 0:00:39.936 ********
2026-04-16 05:39:23.349867 | orchestrator | changed: [testbed-node-5] => {
2026-04-16 05:39:23.349878 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-04-16 05:39:23.349889 | orchestrator |         "ceph_osd_devices": {
2026-04-16 05:39:23.349903 | orchestrator |             "sdb": {
2026-04-16 05:39:23.349921 | orchestrator |                 "osd_lvm_uuid": "4d9f1eac-7172-5024-9561-d385c629a6f5"
2026-04-16 05:39:23.349937 | orchestrator |             },
2026-04-16 05:39:23.349953 | orchestrator |             "sdc": {
2026-04-16 05:39:23.349970 | orchestrator |                 "osd_lvm_uuid": "44db58af-23ca-547e-81cd-90c78ecf63d9"
2026-04-16 05:39:23.349986 | orchestrator |             }
2026-04-16 05:39:23.350003 | orchestrator |         },
2026-04-16 05:39:23.350087 | orchestrator |         "lvm_volumes": [
2026-04-16 05:39:23.350112 | orchestrator |             {
2026-04-16 05:39:23.350131 | orchestrator |                 "data": "osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5",
2026-04-16 05:39:23.350149 | orchestrator |                 "data_vg": "ceph-4d9f1eac-7172-5024-9561-d385c629a6f5"
2026-04-16 05:39:23.350167 | orchestrator |             },
2026-04-16 05:39:23.350185 | orchestrator |             {
2026-04-16 05:39:23.350202 | orchestrator |                 "data": "osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9",
2026-04-16 05:39:23.350220 | orchestrator |                 "data_vg": "ceph-44db58af-23ca-547e-81cd-90c78ecf63d9"
2026-04-16 05:39:23.350239 | orchestrator |             }
2026-04-16 05:39:23.350258 | orchestrator |         ]
2026-04-16 05:39:23.350276 | orchestrator |     }
2026-04-16 05:39:23.350319 | orchestrator | }
2026-04-16 05:39:23.350339 | orchestrator |
2026-04-16 05:39:23.350358 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-04-16 05:39:23.350370 | orchestrator | Thursday 16 April 2026 05:39:22 +0000 (0:00:00.212) 0:00:40.149 ********
2026-04-16 05:39:23.350381 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-04-16 05:39:23.350392 | orchestrator |
2026-04-16 05:39:23.350403 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:39:23.350414 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 05:39:23.350426 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 05:39:23.350437 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 05:39:23.350448 | orchestrator |
2026-04-16 05:39:23.350459 | orchestrator |
2026-04-16 05:39:23.350469 | orchestrator |
2026-04-16 05:39:23.350480 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:39:23.350491 | orchestrator | Thursday 16 April 2026 05:39:23 +0000 (0:00:00.965) 0:00:41.114 ********
2026-04-16 05:39:23.350502 | orchestrator | ===============================================================================
2026-04-16 05:39:23.350512 | orchestrator | Write configuration file ------------------------------------------------ 3.82s
2026-04-16 05:39:23.350523 | orchestrator | Add known partitions to the list of available block devices ------------- 1.69s
2026-04-16 05:39:23.350546 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s
2026-04-16 05:39:23.350557 | orchestrator | Print configuration data ------------------------------------------------ 1.05s
2026-04-16 05:39:23.350568 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2026-04-16 05:39:23.350578 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2026-04-16 05:39:23.350589 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s
2026-04-16 05:39:23.350600 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s
2026-04-16 05:39:23.350610 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s
2026-04-16 05:39:23.350621 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s
2026-04-16 05:39:23.350631 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2026-04-16 05:39:23.350642 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2026-04-16 05:39:23.350653 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s
2026-04-16 05:39:23.350678 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2026-04-16 05:39:23.782119 | orchestrator | Set OSD devices config data --------------------------------------------- 0.61s
2026-04-16 05:39:23.782232 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2026-04-16 05:39:23.782247 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.60s
2026-04-16 05:39:23.782276 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-04-16 05:39:23.782287 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s
2026-04-16 05:39:23.782346 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2026-04-16 05:39:46.205378 | orchestrator | 2026-04-16 05:39:46 | INFO  | Task b2a512d2-d9d8-42da-8197-84867727bd37 (sync inventory) is running in background. Output coming soon.
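The "Print configuration data" task output above shows how each entry in ceph_osd_devices is expanded into an lvm_volumes entry: the device's osd_lvm_uuid becomes both the LV name (osd-block-&lt;uuid&gt;) and the VG name (ceph-&lt;uuid&gt;). A minimal sketch of that mapping in plain Python (the function name is hypothetical, not part of the OSISM playbooks):

```python
# Sketch (assumption): reproduce the ceph_osd_devices -> lvm_volumes
# expansion visible in the "Print configuration data" task output.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    # Iterate devices in name order (sdb, sdc, ...), as in the log output.
    for device, spec in sorted(ceph_osd_devices.items()):
        uuid = spec["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume holding the OSD block
            "data_vg": f"ceph-{uuid}",     # volume group created on the device
        })
    return volumes


# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "4d9f1eac-7172-5024-9561-d385c629a6f5"},
    "sdc": {"osd_lvm_uuid": "44db58af-23ca-547e-81cd-90c78ecf63d9"},
}
```

With this input, `build_lvm_volumes(devices)` yields the same two-entry lvm_volumes list the handler later writes to the configuration file.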
2026-04-16 05:40:11.595321 | orchestrator | 2026-04-16 05:39:47 | INFO  | Starting group_vars file reorganization
2026-04-16 05:40:11.595462 | orchestrator | 2026-04-16 05:39:47 | INFO  | Moved 0 file(s) to their respective directories
2026-04-16 05:40:11.595491 | orchestrator | 2026-04-16 05:39:47 | INFO  | Group_vars file reorganization completed
2026-04-16 05:40:11.595510 | orchestrator | 2026-04-16 05:39:50 | INFO  | Starting variable preparation from inventory
2026-04-16 05:40:11.595528 | orchestrator | 2026-04-16 05:39:52 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-16 05:40:11.595549 | orchestrator | 2026-04-16 05:39:52 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-16 05:40:11.595568 | orchestrator | 2026-04-16 05:39:52 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-16 05:40:11.595587 | orchestrator | 2026-04-16 05:39:52 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-16 05:40:11.595607 | orchestrator | 2026-04-16 05:39:52 | INFO  | Variable preparation completed
2026-04-16 05:40:11.595629 | orchestrator | 2026-04-16 05:39:54 | INFO  | Starting inventory overwrite handling
2026-04-16 05:40:11.595648 | orchestrator | 2026-04-16 05:39:54 | INFO  | Handling group overwrites in 99-overwrite
2026-04-16 05:40:11.595668 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removing group frr:children from 60-generic
2026-04-16 05:40:11.595689 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-16 05:40:11.595705 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-16 05:40:11.595741 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-16 05:40:11.595752 | orchestrator | 2026-04-16 05:39:54 | INFO  | Handling group overwrites in 20-roles
2026-04-16 05:40:11.595763 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-16 05:40:11.595774 | orchestrator | 2026-04-16 05:39:54 | INFO  | Removed 5 group(s) in total
2026-04-16 05:40:11.595785 | orchestrator | 2026-04-16 05:39:54 | INFO  | Inventory overwrite handling completed
2026-04-16 05:40:11.595795 | orchestrator | 2026-04-16 05:39:55 | INFO  | Starting merge of inventory files
2026-04-16 05:40:11.595806 | orchestrator | 2026-04-16 05:39:55 | INFO  | Inventory files merged successfully
2026-04-16 05:40:11.595819 | orchestrator | 2026-04-16 05:39:59 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-16 05:40:11.595833 | orchestrator | 2026-04-16 05:40:10 | INFO  | Successfully wrote ClusterShell configuration
2026-04-16 05:40:11.595846 | orchestrator | [master c6b5d3a] 2026-04-16-05-40
2026-04-16 05:40:11.595859 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-04-16 05:40:13.931026 | orchestrator | 2026-04-16 05:40:13 | INFO  | Task e3b21136-627d-4b9f-b08b-ef64e024383c (ceph-create-lvm-devices) was prepared for execution.
2026-04-16 05:40:13.931157 | orchestrator | 2026-04-16 05:40:13 | INFO  | It takes a moment until task e3b21136-627d-4b9f-b08b-ef64e024383c (ceph-create-lvm-devices) has been started and output is visible here.
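The inventory sync above removes any group that a higher-priority layer (such as 99-overwrite) redefines from the lower-priority files before the merge, so each group ends up with exactly one definition. A minimal sketch of that idea, assuming simple dict-based inventory layers (data layout and names are hypothetical, not the OSISM implementation):

```python
# Sketch (assumption): drop groups redefined by the overwrite layer from
# all lower-priority inventory layers, mirroring the "Removing group X
# from Y" log messages above.
def remove_overwritten_groups(layers, overwrite_layer):
    overwritten = set(layers[overwrite_layer])
    removed = 0
    for name, groups in layers.items():
        if name == overwrite_layer:
            continue  # keep the authoritative definitions
        for group in list(groups):
            if group in overwritten:
                del groups[group]
                removed += 1
    return removed


# Toy layers echoing one removal from the log ("frr:children from 60-generic").
layers = {
    "99-overwrite": {"frr:children": ["testbed-manager"]},
    "60-generic": {"frr:children": ["testbed-node-3"], "generic": []},
}
```

Here `remove_overwritten_groups(layers, "99-overwrite")` deletes the shadowed `frr:children` group from 60-generic and reports one removal, analogous to the "Removed 5 group(s) in total" summary.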
2026-04-16 05:40:25.314345 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-16 05:40:25.314481 | orchestrator | 2.16.14
2026-04-16 05:40:25.314498 | orchestrator |
2026-04-16 05:40:25.314511 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-04-16 05:40:25.314523 | orchestrator |
2026-04-16 05:40:25.314535 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-16 05:40:25.314547 | orchestrator | Thursday 16 April 2026 05:40:18 +0000 (0:00:00.319) 0:00:00.319 ********
2026-04-16 05:40:25.314558 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 05:40:25.314570 | orchestrator |
2026-04-16 05:40:25.314581 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-04-16 05:40:25.314592 | orchestrator | Thursday 16 April 2026 05:40:18 +0000 (0:00:00.245) 0:00:00.565 ********
2026-04-16 05:40:25.314603 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:40:25.314614 | orchestrator |
2026-04-16 05:40:25.314625 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.314636 | orchestrator | Thursday 16 April 2026 05:40:18 +0000 (0:00:00.252) 0:00:00.818 ********
2026-04-16 05:40:25.314647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-04-16 05:40:25.314658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-04-16 05:40:25.314684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-04-16 05:40:25.314695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-04-16 05:40:25.314706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-04-16 05:40:25.314717 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-04-16 05:40:25.314728 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-04-16 05:40:25.314738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-04-16 05:40:25.314749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-04-16 05:40:25.314760 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-04-16 05:40:25.314791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-04-16 05:40:25.314802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-04-16 05:40:25.314812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-04-16 05:40:25.314823 | orchestrator |
2026-04-16 05:40:25.314836 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.314849 | orchestrator | Thursday 16 April 2026 05:40:19 +0000 (0:00:00.502) 0:00:01.321 ********
2026-04-16 05:40:25.314862 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.314875 | orchestrator |
2026-04-16 05:40:25.314887 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.314899 | orchestrator | Thursday 16 April 2026 05:40:19 +0000 (0:00:00.197) 0:00:01.519 ********
2026-04-16 05:40:25.314911 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.314924 | orchestrator |
2026-04-16 05:40:25.314936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.314950 | orchestrator | Thursday 16 April 2026 05:40:19 +0000 (0:00:00.193) 0:00:01.712 ********
2026-04-16 05:40:25.314962 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.314974 | orchestrator |
2026-04-16 05:40:25.314986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.314999 | orchestrator | Thursday 16 April 2026 05:40:19 +0000 (0:00:00.201) 0:00:01.914 ********
2026-04-16 05:40:25.315012 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315032 | orchestrator |
2026-04-16 05:40:25.315051 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315069 | orchestrator | Thursday 16 April 2026 05:40:19 +0000 (0:00:00.189) 0:00:02.103 ********
2026-04-16 05:40:25.315088 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315106 | orchestrator |
2026-04-16 05:40:25.315125 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315144 | orchestrator | Thursday 16 April 2026 05:40:20 +0000 (0:00:00.212) 0:00:02.315 ********
2026-04-16 05:40:25.315164 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315184 | orchestrator |
2026-04-16 05:40:25.315202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315220 | orchestrator | Thursday 16 April 2026 05:40:20 +0000 (0:00:00.190) 0:00:02.506 ********
2026-04-16 05:40:25.315278 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315297 | orchestrator |
2026-04-16 05:40:25.315315 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315332 | orchestrator | Thursday 16 April 2026 05:40:20 +0000 (0:00:00.222) 0:00:02.728 ********
2026-04-16 05:40:25.315349 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315365 | orchestrator |
2026-04-16 05:40:25.315381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315398 | orchestrator | Thursday 16 April 2026 05:40:20 +0000 (0:00:00.189) 0:00:02.917 ********
2026-04-16 05:40:25.315418 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64)
2026-04-16 05:40:25.315439 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64)
2026-04-16 05:40:25.315459 | orchestrator |
2026-04-16 05:40:25.315470 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315501 | orchestrator | Thursday 16 April 2026 05:40:21 +0000 (0:00:00.392) 0:00:03.310 ********
2026-04-16 05:40:25.315513 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d)
2026-04-16 05:40:25.315523 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d)
2026-04-16 05:40:25.315534 | orchestrator |
2026-04-16 05:40:25.315545 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315567 | orchestrator | Thursday 16 April 2026 05:40:21 +0000 (0:00:00.577) 0:00:03.888 ********
2026-04-16 05:40:25.315578 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834)
2026-04-16 05:40:25.315589 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834)
2026-04-16 05:40:25.315599 | orchestrator |
2026-04-16 05:40:25.315610 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315620 | orchestrator | Thursday 16 April 2026 05:40:22 +0000 (0:00:00.630) 0:00:04.518 ********
2026-04-16 05:40:25.315631 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb)
2026-04-16 05:40:25.315641 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb)
2026-04-16 05:40:25.315652 | orchestrator |
2026-04-16 05:40:25.315670 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:25.315682 | orchestrator | Thursday 16 April 2026 05:40:23 +0000 (0:00:00.801) 0:00:05.320 ********
2026-04-16 05:40:25.315693 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-16 05:40:25.315703 | orchestrator |
2026-04-16 05:40:25.315714 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.315725 | orchestrator | Thursday 16 April 2026 05:40:23 +0000 (0:00:00.326) 0:00:05.647 ********
2026-04-16 05:40:25.315735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-04-16 05:40:25.315746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-04-16 05:40:25.315756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-04-16 05:40:25.315767 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-04-16 05:40:25.315777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-04-16 05:40:25.315788 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-04-16 05:40:25.315798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-04-16 05:40:25.315808 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-04-16 05:40:25.315819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-04-16 05:40:25.315829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-04-16 05:40:25.315840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-04-16 05:40:25.315850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-04-16 05:40:25.315861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-04-16 05:40:25.315871 | orchestrator |
2026-04-16 05:40:25.315882 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.315893 | orchestrator | Thursday 16 April 2026 05:40:23 +0000 (0:00:00.395) 0:00:06.043 ********
2026-04-16 05:40:25.315903 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315915 | orchestrator |
2026-04-16 05:40:25.315932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.315950 | orchestrator | Thursday 16 April 2026 05:40:24 +0000 (0:00:00.204) 0:00:06.247 ********
2026-04-16 05:40:25.315969 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.315990 | orchestrator |
2026-04-16 05:40:25.316001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.316012 | orchestrator | Thursday 16 April 2026 05:40:24 +0000 (0:00:00.200) 0:00:06.447 ********
2026-04-16 05:40:25.316023 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.316033 | orchestrator |
2026-04-16 05:40:25.316052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.316062 | orchestrator | Thursday 16 April 2026 05:40:24 +0000 (0:00:00.203) 0:00:06.651 ********
2026-04-16 05:40:25.316073 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.316084 | orchestrator |
2026-04-16 05:40:25.316094 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.316105 | orchestrator | Thursday 16 April 2026 05:40:24 +0000 (0:00:00.196) 0:00:06.847 ********
2026-04-16 05:40:25.316116 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.316126 | orchestrator |
2026-04-16 05:40:25.316137 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.316147 | orchestrator | Thursday 16 April 2026 05:40:24 +0000 (0:00:00.194) 0:00:07.042 ********
2026-04-16 05:40:25.316158 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.316169 | orchestrator |
2026-04-16 05:40:25.316179 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:25.316190 | orchestrator | Thursday 16 April 2026 05:40:25 +0000 (0:00:00.195) 0:00:07.238 ********
2026-04-16 05:40:25.316201 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:25.316212 | orchestrator |
2026-04-16 05:40:25.316277 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.354993 | orchestrator | Thursday 16 April 2026 05:40:25 +0000 (0:00:00.204) 0:00:07.442 ********
2026-04-16 05:40:33.355082 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355093 | orchestrator |
2026-04-16 05:40:33.355101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.355109 | orchestrator | Thursday 16 April 2026 05:40:25 +0000 (0:00:00.608) 0:00:08.050 ********
2026-04-16 05:40:33.355116 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-04-16 05:40:33.355124 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-04-16 05:40:33.355131 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-04-16 05:40:33.355138 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-04-16 05:40:33.355144 | orchestrator |
2026-04-16 05:40:33.355151 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.355158 | orchestrator | Thursday 16 April 2026 05:40:26 +0000 (0:00:00.651) 0:00:08.701 ********
2026-04-16 05:40:33.355165 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355171 | orchestrator |
2026-04-16 05:40:33.355178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.355185 | orchestrator | Thursday 16 April 2026 05:40:26 +0000 (0:00:00.209) 0:00:08.911 ********
2026-04-16 05:40:33.355191 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355198 | orchestrator |
2026-04-16 05:40:33.355204 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.355276 | orchestrator | Thursday 16 April 2026 05:40:26 +0000 (0:00:00.203) 0:00:09.114 ********
2026-04-16 05:40:33.355285 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355292 | orchestrator |
2026-04-16 05:40:33.355299 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:33.355306 | orchestrator | Thursday 16 April 2026 05:40:27 +0000 (0:00:00.204) 0:00:09.319 ********
2026-04-16 05:40:33.355312 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355319 | orchestrator |
2026-04-16 05:40:33.355326 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-16 05:40:33.355332 | orchestrator | Thursday 16 April 2026 05:40:27 +0000 (0:00:00.196) 0:00:09.515 ********
2026-04-16 05:40:33.355339 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355345 | orchestrator |
2026-04-16 05:40:33.355352 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-16 05:40:33.355359 | orchestrator | Thursday 16 April 2026 05:40:27 +0000 (0:00:00.137) 0:00:09.653 ********
2026-04-16 05:40:33.355366 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}})
2026-04-16 05:40:33.355388 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}})
2026-04-16 05:40:33.355395 | orchestrator |
2026-04-16 05:40:33.355401 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-16 05:40:33.355409 | orchestrator | Thursday 16 April 2026 05:40:27 +0000 (0:00:00.187) 0:00:09.841 ********
2026-04-16 05:40:33.355417 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355426 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355432 | orchestrator |
2026-04-16 05:40:33.355439 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-16 05:40:33.355445 | orchestrator | Thursday 16 April 2026 05:40:29 +0000 (0:00:01.970) 0:00:11.812 ********
2026-04-16 05:40:33.355452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355460 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355467 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355473 | orchestrator |
2026-04-16 05:40:33.355480 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-16 05:40:33.355487 | orchestrator | Thursday 16 April 2026 05:40:29 +0000 (0:00:00.164) 0:00:11.976 ********
2026-04-16 05:40:33.355493 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355500 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355506 | orchestrator |
2026-04-16 05:40:33.355513 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-16 05:40:33.355520 | orchestrator | Thursday 16 April 2026 05:40:31 +0000 (0:00:01.501) 0:00:13.477 ********
2026-04-16 05:40:33.355526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355540 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355546 | orchestrator |
2026-04-16 05:40:33.355554 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-16 05:40:33.355562 | orchestrator | Thursday 16 April 2026 05:40:31 +0000 (0:00:00.174) 0:00:13.652 ********
2026-04-16 05:40:33.355582 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355590 | orchestrator |
2026-04-16 05:40:33.355598 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-16 05:40:33.355606 | orchestrator | Thursday 16 April 2026 05:40:31 +0000 (0:00:00.332) 0:00:13.984 ********
2026-04-16 05:40:33.355613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355621 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355629 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355636 | orchestrator |
2026-04-16 05:40:33.355643 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-16 05:40:33.355651 | orchestrator | Thursday 16 April 2026 05:40:31 +0000 (0:00:00.153) 0:00:14.138 ********
2026-04-16 05:40:33.355664 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355672 | orchestrator |
2026-04-16 05:40:33.355680 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-16 05:40:33.355688 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.137) 0:00:14.275 ********
2026-04-16 05:40:33.355700 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355707 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355715 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355723 | orchestrator |
2026-04-16 05:40:33.355731 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-16 05:40:33.355738 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.154) 0:00:14.430 ********
2026-04-16 05:40:33.355746 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355754 | orchestrator |
2026-04-16 05:40:33.355761 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-16 05:40:33.355768 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.133) 0:00:14.564 ********
2026-04-16 05:40:33.355776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355783 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355791 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355798 | orchestrator |
2026-04-16 05:40:33.355805 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-16 05:40:33.355812 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.156) 0:00:14.720 ********
2026-04-16 05:40:33.355819 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:40:33.355827 | orchestrator |
2026-04-16 05:40:33.355834 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-16 05:40:33.355841 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.143) 0:00:14.863 ********
2026-04-16 05:40:33.355848 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:40:33.355855 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:40:33.355862 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:40:33.355869 | orchestrator |
2026-04-16 05:40:33.355876 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-16 05:40:33.355883 | orchestrator | Thursday 16 April 2026 05:40:32 +0000 (0:00:00.160) 0:00:15.024 ********
2026-04-16 05:40:33.355891 | orchestrator | skipping: [testbed-node-3] =>
(item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:33.355898 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:33.355905 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:33.355911 | orchestrator | 2026-04-16 05:40:33.355917 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-16 05:40:33.355923 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.164) 0:00:15.188 ******** 2026-04-16 05:40:33.355929 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:33.355935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:33.355945 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:33.355952 | orchestrator | 2026-04-16 05:40:33.355958 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-16 05:40:33.355964 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.163) 0:00:15.352 ******** 2026-04-16 05:40:33.355970 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:33.355976 | orchestrator | 2026-04-16 05:40:33.355982 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-16 05:40:33.355992 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.133) 0:00:15.485 ******** 2026-04-16 05:40:39.921857 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.921956 | orchestrator | 2026-04-16 05:40:39.921970 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-04-16 05:40:39.921982 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.141) 0:00:15.627 ******** 2026-04-16 05:40:39.921992 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922002 | orchestrator | 2026-04-16 05:40:39.922012 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-04-16 05:40:39.922077 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.333) 0:00:15.960 ******** 2026-04-16 05:40:39.922087 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:40:39.922098 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-04-16 05:40:39.922108 | orchestrator | } 2026-04-16 05:40:39.922118 | orchestrator | 2026-04-16 05:40:39.922128 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-04-16 05:40:39.922138 | orchestrator | Thursday 16 April 2026 05:40:33 +0000 (0:00:00.150) 0:00:16.111 ******** 2026-04-16 05:40:39.922148 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:40:39.922158 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-04-16 05:40:39.922168 | orchestrator | } 2026-04-16 05:40:39.922177 | orchestrator | 2026-04-16 05:40:39.922187 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-04-16 05:40:39.922264 | orchestrator | Thursday 16 April 2026 05:40:34 +0000 (0:00:00.147) 0:00:16.259 ******** 2026-04-16 05:40:39.922277 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:40:39.922287 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-04-16 05:40:39.922297 | orchestrator | } 2026-04-16 05:40:39.922307 | orchestrator | 2026-04-16 05:40:39.922316 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-04-16 05:40:39.922326 | orchestrator | Thursday 16 April 2026 05:40:34 +0000 (0:00:00.143) 0:00:16.403 ******** 2026-04-16 05:40:39.922336 | orchestrator | ok: 
[testbed-node-3] 2026-04-16 05:40:39.922346 | orchestrator | 2026-04-16 05:40:39.922355 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-04-16 05:40:39.922365 | orchestrator | Thursday 16 April 2026 05:40:34 +0000 (0:00:00.667) 0:00:17.070 ******** 2026-04-16 05:40:39.922375 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:39.922386 | orchestrator | 2026-04-16 05:40:39.922397 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-04-16 05:40:39.922409 | orchestrator | Thursday 16 April 2026 05:40:35 +0000 (0:00:00.562) 0:00:17.633 ******** 2026-04-16 05:40:39.922420 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:39.922431 | orchestrator | 2026-04-16 05:40:39.922468 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-04-16 05:40:39.922479 | orchestrator | Thursday 16 April 2026 05:40:36 +0000 (0:00:00.602) 0:00:18.235 ******** 2026-04-16 05:40:39.922490 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:39.922501 | orchestrator | 2026-04-16 05:40:39.922512 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-04-16 05:40:39.922524 | orchestrator | Thursday 16 April 2026 05:40:36 +0000 (0:00:00.146) 0:00:18.382 ******** 2026-04-16 05:40:39.922540 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922557 | orchestrator | 2026-04-16 05:40:39.922581 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-04-16 05:40:39.922630 | orchestrator | Thursday 16 April 2026 05:40:36 +0000 (0:00:00.122) 0:00:18.505 ******** 2026-04-16 05:40:39.922648 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922665 | orchestrator | 2026-04-16 05:40:39.922682 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-04-16 05:40:39.922698 | orchestrator | 
Thursday 16 April 2026 05:40:36 +0000 (0:00:00.113) 0:00:18.618 ******** 2026-04-16 05:40:39.922715 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:40:39.922732 | orchestrator |  "vgs_report": { 2026-04-16 05:40:39.922751 | orchestrator |  "vg": [] 2026-04-16 05:40:39.922768 | orchestrator |  } 2026-04-16 05:40:39.922784 | orchestrator | } 2026-04-16 05:40:39.922794 | orchestrator | 2026-04-16 05:40:39.922803 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-04-16 05:40:39.922813 | orchestrator | Thursday 16 April 2026 05:40:36 +0000 (0:00:00.145) 0:00:18.764 ******** 2026-04-16 05:40:39.922822 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922832 | orchestrator | 2026-04-16 05:40:39.922841 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-04-16 05:40:39.922851 | orchestrator | Thursday 16 April 2026 05:40:36 +0000 (0:00:00.131) 0:00:18.896 ******** 2026-04-16 05:40:39.922860 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922870 | orchestrator | 2026-04-16 05:40:39.922879 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-04-16 05:40:39.922889 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.350) 0:00:19.246 ******** 2026-04-16 05:40:39.922898 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922907 | orchestrator | 2026-04-16 05:40:39.922917 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-04-16 05:40:39.922926 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.131) 0:00:19.378 ******** 2026-04-16 05:40:39.922936 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922945 | orchestrator | 2026-04-16 05:40:39.922954 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-04-16 05:40:39.922964 | orchestrator | 
Thursday 16 April 2026 05:40:37 +0000 (0:00:00.137) 0:00:19.515 ******** 2026-04-16 05:40:39.922973 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.922982 | orchestrator | 2026-04-16 05:40:39.922992 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-04-16 05:40:39.923001 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.134) 0:00:19.650 ******** 2026-04-16 05:40:39.923010 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923020 | orchestrator | 2026-04-16 05:40:39.923029 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-04-16 05:40:39.923038 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.139) 0:00:19.790 ******** 2026-04-16 05:40:39.923048 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923057 | orchestrator | 2026-04-16 05:40:39.923066 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-04-16 05:40:39.923076 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.130) 0:00:19.920 ******** 2026-04-16 05:40:39.923104 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923114 | orchestrator | 2026-04-16 05:40:39.923123 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-04-16 05:40:39.923133 | orchestrator | Thursday 16 April 2026 05:40:37 +0000 (0:00:00.138) 0:00:20.059 ******** 2026-04-16 05:40:39.923142 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923152 | orchestrator | 2026-04-16 05:40:39.923161 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-04-16 05:40:39.923171 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.120) 0:00:20.179 ******** 2026-04-16 05:40:39.923180 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923190 | orchestrator | 2026-04-16 05:40:39.923199 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-04-16 05:40:39.923233 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.143) 0:00:20.323 ******** 2026-04-16 05:40:39.923252 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923262 | orchestrator | 2026-04-16 05:40:39.923272 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-04-16 05:40:39.923281 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.147) 0:00:20.470 ******** 2026-04-16 05:40:39.923291 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923300 | orchestrator | 2026-04-16 05:40:39.923318 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-04-16 05:40:39.923328 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.144) 0:00:20.615 ******** 2026-04-16 05:40:39.923337 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923346 | orchestrator | 2026-04-16 05:40:39.923356 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-04-16 05:40:39.923366 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.138) 0:00:20.754 ******** 2026-04-16 05:40:39.923375 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923384 | orchestrator | 2026-04-16 05:40:39.923394 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-04-16 05:40:39.923404 | orchestrator | Thursday 16 April 2026 05:40:38 +0000 (0:00:00.309) 0:00:21.063 ******** 2026-04-16 05:40:39.923415 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:39.923427 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 
'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:39.923436 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923445 | orchestrator | 2026-04-16 05:40:39.923455 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-04-16 05:40:39.923465 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.149) 0:00:21.212 ******** 2026-04-16 05:40:39.923474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:39.923484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:39.923493 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923503 | orchestrator | 2026-04-16 05:40:39.923512 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-04-16 05:40:39.923522 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.147) 0:00:21.359 ******** 2026-04-16 05:40:39.923532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:39.923541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:39.923551 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923560 | orchestrator | 2026-04-16 05:40:39.923570 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-04-16 05:40:39.923579 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.172) 0:00:21.532 ******** 2026-04-16 05:40:39.923589 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:39.923598 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:39.923608 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923617 | orchestrator | 2026-04-16 05:40:39.923627 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-04-16 05:40:39.923636 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.191) 0:00:21.723 ******** 2026-04-16 05:40:39.923652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:39.923661 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:39.923671 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:39.923680 | orchestrator | 2026-04-16 05:40:39.923690 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-04-16 05:40:39.923699 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.172) 0:00:21.895 ******** 2026-04-16 05:40:39.923716 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.071472 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.071587 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.071604 | orchestrator | 2026-04-16 05:40:45.071616 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-04-16 05:40:45.071629 | orchestrator | Thursday 16 April 2026 05:40:39 +0000 (0:00:00.158) 0:00:22.053 ******** 2026-04-16 05:40:45.071641 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.071652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.071663 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.071674 | orchestrator | 2026-04-16 05:40:45.071702 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-04-16 05:40:45.071713 | orchestrator | Thursday 16 April 2026 05:40:40 +0000 (0:00:00.160) 0:00:22.214 ******** 2026-04-16 05:40:45.071724 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.071735 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.071746 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.071756 | orchestrator | 2026-04-16 05:40:45.071767 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-04-16 05:40:45.071778 | orchestrator | Thursday 16 April 2026 05:40:40 +0000 (0:00:00.146) 0:00:22.361 ******** 2026-04-16 05:40:45.071789 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:45.071801 | orchestrator | 2026-04-16 05:40:45.071811 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-04-16 05:40:45.071822 | orchestrator | Thursday 16 April 2026 05:40:40 +0000 
(0:00:00.532) 0:00:22.894 ******** 2026-04-16 05:40:45.071833 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:45.071843 | orchestrator | 2026-04-16 05:40:45.071854 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-04-16 05:40:45.071865 | orchestrator | Thursday 16 April 2026 05:40:41 +0000 (0:00:00.536) 0:00:23.431 ******** 2026-04-16 05:40:45.071876 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:40:45.071886 | orchestrator | 2026-04-16 05:40:45.071897 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-04-16 05:40:45.071909 | orchestrator | Thursday 16 April 2026 05:40:41 +0000 (0:00:00.148) 0:00:23.579 ******** 2026-04-16 05:40:45.071920 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'vg_name': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}) 2026-04-16 05:40:45.071932 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'vg_name': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}) 2026-04-16 05:40:45.071964 | orchestrator | 2026-04-16 05:40:45.071976 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-04-16 05:40:45.071986 | orchestrator | Thursday 16 April 2026 05:40:41 +0000 (0:00:00.176) 0:00:23.756 ******** 2026-04-16 05:40:45.071998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.072012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.072024 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.072037 | orchestrator | 2026-04-16 05:40:45.072050 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-04-16 05:40:45.072063 | orchestrator | Thursday 16 April 2026 05:40:41 +0000 (0:00:00.340) 0:00:24.096 ******** 2026-04-16 05:40:45.072076 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.072089 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.072101 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.072114 | orchestrator | 2026-04-16 05:40:45.072126 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-04-16 05:40:45.072139 | orchestrator | Thursday 16 April 2026 05:40:42 +0000 (0:00:00.163) 0:00:24.259 ******** 2026-04-16 05:40:45.072152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})  2026-04-16 05:40:45.072165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})  2026-04-16 05:40:45.072178 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:40:45.072190 | orchestrator | 2026-04-16 05:40:45.072230 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-16 05:40:45.072265 | orchestrator | Thursday 16 April 2026 05:40:42 +0000 (0:00:00.181) 0:00:24.440 ******** 2026-04-16 05:40:45.072306 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 05:40:45.072320 | orchestrator |  "lvm_report": { 2026-04-16 05:40:45.072333 | orchestrator |  "lv": [ 2026-04-16 05:40:45.072345 | orchestrator |  { 2026-04-16 05:40:45.072358 | orchestrator |  "lv_name": 
"osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab", 2026-04-16 05:40:45.072369 | orchestrator |  "vg_name": "ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab" 2026-04-16 05:40:45.072380 | orchestrator |  }, 2026-04-16 05:40:45.072391 | orchestrator |  { 2026-04-16 05:40:45.072402 | orchestrator |  "lv_name": "osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9", 2026-04-16 05:40:45.072412 | orchestrator |  "vg_name": "ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9" 2026-04-16 05:40:45.072423 | orchestrator |  } 2026-04-16 05:40:45.072434 | orchestrator |  ], 2026-04-16 05:40:45.072445 | orchestrator |  "pv": [ 2026-04-16 05:40:45.072456 | orchestrator |  { 2026-04-16 05:40:45.072466 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-16 05:40:45.072477 | orchestrator |  "vg_name": "ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9" 2026-04-16 05:40:45.072488 | orchestrator |  }, 2026-04-16 05:40:45.072499 | orchestrator |  { 2026-04-16 05:40:45.072516 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-16 05:40:45.072528 | orchestrator |  "vg_name": "ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab" 2026-04-16 05:40:45.072539 | orchestrator |  } 2026-04-16 05:40:45.072549 | orchestrator |  ] 2026-04-16 05:40:45.072560 | orchestrator |  } 2026-04-16 05:40:45.072572 | orchestrator | } 2026-04-16 05:40:45.072583 | orchestrator | 2026-04-16 05:40:45.072600 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-16 05:40:45.072611 | orchestrator | 2026-04-16 05:40:45.072622 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-16 05:40:45.072633 | orchestrator | Thursday 16 April 2026 05:40:42 +0000 (0:00:00.294) 0:00:24.735 ******** 2026-04-16 05:40:45.072645 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-04-16 05:40:45.072656 | orchestrator | 2026-04-16 05:40:45.072667 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-16 
05:40:45.072678 | orchestrator | Thursday 16 April 2026 05:40:42 +0000 (0:00:00.251) 0:00:24.987 ******** 2026-04-16 05:40:45.072689 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:40:45.072699 | orchestrator | 2026-04-16 05:40:45.072710 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.072721 | orchestrator | Thursday 16 April 2026 05:40:43 +0000 (0:00:00.235) 0:00:25.222 ******** 2026-04-16 05:40:45.072732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-04-16 05:40:45.072743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-04-16 05:40:45.072753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-04-16 05:40:45.072764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-04-16 05:40:45.072775 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-04-16 05:40:45.072785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-04-16 05:40:45.072796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-04-16 05:40:45.072807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-04-16 05:40:45.072818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-04-16 05:40:45.072829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-04-16 05:40:45.072840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-04-16 05:40:45.072850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-04-16 05:40:45.072861 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-04-16 05:40:45.072872 | orchestrator | 2026-04-16 05:40:45.072883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.072893 | orchestrator | Thursday 16 April 2026 05:40:43 +0000 (0:00:00.405) 0:00:25.628 ******** 2026-04-16 05:40:45.072904 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.072915 | orchestrator | 2026-04-16 05:40:45.072926 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.072937 | orchestrator | Thursday 16 April 2026 05:40:43 +0000 (0:00:00.187) 0:00:25.815 ******** 2026-04-16 05:40:45.072948 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.072958 | orchestrator | 2026-04-16 05:40:45.072969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.072980 | orchestrator | Thursday 16 April 2026 05:40:44 +0000 (0:00:00.568) 0:00:26.384 ******** 2026-04-16 05:40:45.072991 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.073002 | orchestrator | 2026-04-16 05:40:45.073013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.073024 | orchestrator | Thursday 16 April 2026 05:40:44 +0000 (0:00:00.204) 0:00:26.588 ******** 2026-04-16 05:40:45.073035 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.073046 | orchestrator | 2026-04-16 05:40:45.073057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:45.073068 | orchestrator | Thursday 16 April 2026 05:40:44 +0000 (0:00:00.206) 0:00:26.795 ******** 2026-04-16 05:40:45.073085 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.073096 | orchestrator | 2026-04-16 05:40:45.073107 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-04-16 05:40:45.073118 | orchestrator | Thursday 16 April 2026 05:40:44 +0000 (0:00:00.206) 0:00:27.001 ******** 2026-04-16 05:40:45.073129 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:45.073140 | orchestrator | 2026-04-16 05:40:45.073158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:55.847939 | orchestrator | Thursday 16 April 2026 05:40:45 +0000 (0:00:00.200) 0:00:27.201 ******** 2026-04-16 05:40:55.848054 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:55.848072 | orchestrator | 2026-04-16 05:40:55.848085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:55.848096 | orchestrator | Thursday 16 April 2026 05:40:45 +0000 (0:00:00.198) 0:00:27.400 ******** 2026-04-16 05:40:55.848107 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:40:55.848119 | orchestrator | 2026-04-16 05:40:55.848130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:55.848141 | orchestrator | Thursday 16 April 2026 05:40:45 +0000 (0:00:00.203) 0:00:27.603 ******** 2026-04-16 05:40:55.848152 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8) 2026-04-16 05:40:55.848164 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8) 2026-04-16 05:40:55.848175 | orchestrator | 2026-04-16 05:40:55.848252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:40:55.848267 | orchestrator | Thursday 16 April 2026 05:40:45 +0000 (0:00:00.419) 0:00:28.022 ******** 2026-04-16 05:40:55.848278 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13) 2026-04-16 05:40:55.848289 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13)
2026-04-16 05:40:55.848300 | orchestrator |
2026-04-16 05:40:55.848310 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:55.848321 | orchestrator | Thursday 16 April 2026 05:40:46 +0000 (0:00:00.421) 0:00:28.444 ********
2026-04-16 05:40:55.848332 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3)
2026-04-16 05:40:55.848343 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3)
2026-04-16 05:40:55.848354 | orchestrator |
2026-04-16 05:40:55.848365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:55.848375 | orchestrator | Thursday 16 April 2026 05:40:46 +0000 (0:00:00.650) 0:00:29.094 ********
2026-04-16 05:40:55.848386 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99)
2026-04-16 05:40:55.848397 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99)
2026-04-16 05:40:55.848408 | orchestrator |
2026-04-16 05:40:55.848418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-04-16 05:40:55.848429 | orchestrator | Thursday 16 April 2026 05:40:47 +0000 (0:00:00.853) 0:00:29.947 ********
2026-04-16 05:40:55.848440 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-04-16 05:40:55.848451 | orchestrator |
2026-04-16 05:40:55.848462 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848475 | orchestrator | Thursday 16 April 2026 05:40:48 +0000 (0:00:00.352) 0:00:30.300 ********
2026-04-16 05:40:55.848488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-04-16 05:40:55.848502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-04-16 05:40:55.848514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-04-16 05:40:55.848547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-04-16 05:40:55.848560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-04-16 05:40:55.848573 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-04-16 05:40:55.848585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-04-16 05:40:55.848597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-04-16 05:40:55.848609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-04-16 05:40:55.848622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-04-16 05:40:55.848634 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-04-16 05:40:55.848647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-04-16 05:40:55.848659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-04-16 05:40:55.848672 | orchestrator |
2026-04-16 05:40:55.848684 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848697 | orchestrator | Thursday 16 April 2026 05:40:48 +0000 (0:00:00.435) 0:00:30.736 ********
2026-04-16 05:40:55.848709 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848721 | orchestrator |
2026-04-16 05:40:55.848735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848747 | orchestrator | Thursday 16 April 2026 05:40:48 +0000 (0:00:00.222) 0:00:30.958 ********
2026-04-16 05:40:55.848760 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848772 | orchestrator |
2026-04-16 05:40:55.848785 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848797 | orchestrator | Thursday 16 April 2026 05:40:49 +0000 (0:00:00.204) 0:00:31.163 ********
2026-04-16 05:40:55.848809 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848822 | orchestrator |
2026-04-16 05:40:55.848852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848864 | orchestrator | Thursday 16 April 2026 05:40:49 +0000 (0:00:00.200) 0:00:31.363 ********
2026-04-16 05:40:55.848875 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848885 | orchestrator |
2026-04-16 05:40:55.848896 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848907 | orchestrator | Thursday 16 April 2026 05:40:49 +0000 (0:00:00.202) 0:00:31.565 ********
2026-04-16 05:40:55.848918 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848928 | orchestrator |
2026-04-16 05:40:55.848939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848950 | orchestrator | Thursday 16 April 2026 05:40:49 +0000 (0:00:00.196) 0:00:31.761 ********
2026-04-16 05:40:55.848961 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.848972 | orchestrator |
2026-04-16 05:40:55.848982 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.848993 | orchestrator | Thursday 16 April 2026 05:40:49 +0000 (0:00:00.201) 0:00:31.963 ********
2026-04-16 05:40:55.849011 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849022 | orchestrator |
2026-04-16 05:40:55.849033 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849044 | orchestrator | Thursday 16 April 2026 05:40:50 +0000 (0:00:00.205) 0:00:32.168 ********
2026-04-16 05:40:55.849054 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849065 | orchestrator |
2026-04-16 05:40:55.849076 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849087 | orchestrator | Thursday 16 April 2026 05:40:50 +0000 (0:00:00.604) 0:00:32.773 ********
2026-04-16 05:40:55.849097 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-04-16 05:40:55.849122 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-04-16 05:40:55.849141 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-04-16 05:40:55.849157 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-04-16 05:40:55.849174 | orchestrator |
2026-04-16 05:40:55.849239 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849260 | orchestrator | Thursday 16 April 2026 05:40:51 +0000 (0:00:00.667) 0:00:33.440 ********
2026-04-16 05:40:55.849279 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849297 | orchestrator |
2026-04-16 05:40:55.849313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849325 | orchestrator | Thursday 16 April 2026 05:40:51 +0000 (0:00:00.204) 0:00:33.645 ********
2026-04-16 05:40:55.849336 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849346 | orchestrator |
2026-04-16 05:40:55.849357 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849368 | orchestrator | Thursday 16 April 2026 05:40:51 +0000 (0:00:00.215) 0:00:33.860 ********
2026-04-16 05:40:55.849378 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849389 | orchestrator |
2026-04-16 05:40:55.849400 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-04-16 05:40:55.849411 | orchestrator | Thursday 16 April 2026 05:40:51 +0000 (0:00:00.208) 0:00:34.069 ********
2026-04-16 05:40:55.849421 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849432 | orchestrator |
2026-04-16 05:40:55.849443 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-04-16 05:40:55.849454 | orchestrator | Thursday 16 April 2026 05:40:52 +0000 (0:00:00.205) 0:00:34.274 ********
2026-04-16 05:40:55.849464 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849475 | orchestrator |
2026-04-16 05:40:55.849486 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-04-16 05:40:55.849496 | orchestrator | Thursday 16 April 2026 05:40:52 +0000 (0:00:00.142) 0:00:34.416 ********
2026-04-16 05:40:55.849507 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}})
2026-04-16 05:40:55.849518 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '280a11fd-e83f-54f4-b253-754709c5cdf6'}})
2026-04-16 05:40:55.849529 | orchestrator |
2026-04-16 05:40:55.849540 | orchestrator | TASK [Create block VGs] ********************************************************
2026-04-16 05:40:55.849550 | orchestrator | Thursday 16 April 2026 05:40:52 +0000 (0:00:00.194) 0:00:34.611 ********
2026-04-16 05:40:55.849563 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:40:55.849575 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:40:55.849586 | orchestrator |
2026-04-16 05:40:55.849596 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-04-16 05:40:55.849607 | orchestrator | Thursday 16 April 2026 05:40:54 +0000 (0:00:01.845) 0:00:36.457 ********
2026-04-16 05:40:55.849618 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:40:55.849630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:40:55.849641 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:40:55.849651 | orchestrator |
2026-04-16 05:40:55.849662 | orchestrator | TASK [Create block LVs] ********************************************************
2026-04-16 05:40:55.849673 | orchestrator | Thursday 16 April 2026 05:40:54 +0000 (0:00:00.151) 0:00:36.609 ********
2026-04-16 05:40:55.849684 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:40:55.849713 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.615635 | orchestrator |
2026-04-16 05:41:01.615744 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-04-16 05:41:01.615762 | orchestrator | Thursday 16 April 2026 05:40:55 +0000 (0:00:01.364) 0:00:37.973 ********
2026-04-16 05:41:01.615775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.615788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.615799 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.615811 | orchestrator |
2026-04-16 05:41:01.615838 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-04-16 05:41:01.615850 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.366) 0:00:38.339 ********
2026-04-16 05:41:01.615861 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.615872 | orchestrator |
2026-04-16 05:41:01.615883 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-04-16 05:41:01.615894 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.146) 0:00:38.486 ********
2026-04-16 05:41:01.615905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.615915 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.615926 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.615937 | orchestrator |
2026-04-16 05:41:01.615948 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-04-16 05:41:01.615959 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.157) 0:00:38.644 ********
2026-04-16 05:41:01.615969 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.615980 | orchestrator |
2026-04-16 05:41:01.615991 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-04-16 05:41:01.616002 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.145) 0:00:38.789 ********
2026-04-16 05:41:01.616012 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.616023 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.616034 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616045 | orchestrator |
2026-04-16 05:41:01.616057 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-04-16 05:41:01.616067 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.164) 0:00:38.953 ********
2026-04-16 05:41:01.616078 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616089 | orchestrator |
2026-04-16 05:41:01.616099 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-04-16 05:41:01.616110 | orchestrator | Thursday 16 April 2026 05:40:56 +0000 (0:00:00.128) 0:00:39.082 ********
2026-04-16 05:41:01.616121 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.616132 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.616142 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616153 | orchestrator |
2026-04-16 05:41:01.616166 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-04-16 05:41:01.616231 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.145) 0:00:39.227 ********
2026-04-16 05:41:01.616246 | orchestrator | ok: [testbed-node-4]
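An aside on the output above: every VG and LV name in the "Create dict of block VGs -> PVs from ceph_osd_devices", "Create block VGs", and "Create block LVs" tasks is derived from the `osd_lvm_uuid` of the corresponding `ceph_osd_devices` entry (VG `ceph-<uuid>`, LV `osd-block-<uuid>`). A minimal Python sketch of that naming scheme, reconstructed from the log output rather than taken from the playbook's actual Jinja2:

```python
# Illustrative only: rebuild the lvm_volumes-style items seen in the log
# from the osd_lvm_uuid of each ceph_osd_devices entry. The function name
# and structure are assumptions, not the playbook's real implementation.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "7b8b78e2-2212-5c47-abe3-ec23a1e6354f"},
    "sdc": {"osd_lvm_uuid": "280a11fd-e83f-54f4-b253-754709c5cdf6"},
}


def lvm_volumes(devices: dict) -> list[dict]:
    """One block LV 'osd-block-<uuid>' in a VG 'ceph-<uuid>' per device."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]


for item in lvm_volumes(ceph_osd_devices):
    print(item)
```

This matches the two `changed:` items of the "Create block VGs"/"Create block LVs" tasks and the VG/LV pairs listed later by "Create list of VG/LV names".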
2026-04-16 05:41:01.616259 | orchestrator |
2026-04-16 05:41:01.616272 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-04-16 05:41:01.616285 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.121) 0:00:39.348 ********
2026-04-16 05:41:01.616298 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.616311 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.616323 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616337 | orchestrator |
2026-04-16 05:41:01.616350 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-04-16 05:41:01.616363 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.143) 0:00:39.492 ********
2026-04-16 05:41:01.616376 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.616389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.616401 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616414 | orchestrator |
2026-04-16 05:41:01.616427 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-04-16 05:41:01.616458 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.149) 0:00:39.641 ********
2026-04-16 05:41:01.616471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:01.616484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:01.616497 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616509 | orchestrator |
2026-04-16 05:41:01.616522 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-04-16 05:41:01.616532 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.139) 0:00:39.781 ********
2026-04-16 05:41:01.616543 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616554 | orchestrator |
2026-04-16 05:41:01.616570 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-04-16 05:41:01.616582 | orchestrator | Thursday 16 April 2026 05:40:57 +0000 (0:00:00.335) 0:00:40.116 ********
2026-04-16 05:41:01.616592 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616603 | orchestrator |
2026-04-16 05:41:01.616614 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-16 05:41:01.616625 | orchestrator | Thursday 16 April 2026 05:40:58 +0000 (0:00:00.146) 0:00:40.263 ********
2026-04-16 05:41:01.616635 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.616646 | orchestrator |
2026-04-16 05:41:01.616657 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-16 05:41:01.616668 | orchestrator | Thursday 16 April 2026 05:40:58 +0000 (0:00:00.145) 0:00:40.408 ********
2026-04-16 05:41:01.616679 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 05:41:01.616690 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-16 05:41:01.616701 | orchestrator | }
2026-04-16 05:41:01.616715 | orchestrator |
2026-04-16 05:41:01.616733 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-16 05:41:01.616752 | orchestrator | Thursday 16 April 2026 05:40:58 +0000 (0:00:00.155) 0:00:40.564 ********
2026-04-16 05:41:01.616769 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 05:41:01.616796 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-16 05:41:01.616830 | orchestrator | }
2026-04-16 05:41:01.616847 | orchestrator |
2026-04-16 05:41:01.616864 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-16 05:41:01.616882 | orchestrator | Thursday 16 April 2026 05:40:58 +0000 (0:00:00.149) 0:00:40.713 ********
2026-04-16 05:41:01.616898 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 05:41:01.616917 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-16 05:41:01.616936 | orchestrator | }
2026-04-16 05:41:01.616953 | orchestrator |
2026-04-16 05:41:01.616972 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-16 05:41:01.616991 | orchestrator | Thursday 16 April 2026 05:40:58 +0000 (0:00:00.149) 0:00:40.863 ********
2026-04-16 05:41:01.617008 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:01.617026 | orchestrator |
2026-04-16 05:41:01.617039 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-16 05:41:01.617050 | orchestrator | Thursday 16 April 2026 05:40:59 +0000 (0:00:00.566) 0:00:41.430 ********
2026-04-16 05:41:01.617061 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:01.617071 | orchestrator |
2026-04-16 05:41:01.617082 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-16 05:41:01.617093 | orchestrator | Thursday 16 April 2026 05:40:59 +0000 (0:00:00.540) 0:00:41.970 ********
2026-04-16 05:41:01.617104 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:01.617114 | orchestrator |
2026-04-16 05:41:01.617125 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-16 05:41:01.617135 | orchestrator | Thursday 16 April 2026 05:41:00 +0000 (0:00:00.154) 0:00:42.507 ********
2026-04-16 05:41:01.617146 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:01.617163 | orchestrator |
2026-04-16 05:41:01.617181 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-16 05:41:01.617260 | orchestrator | Thursday 16 April 2026 05:41:00 +0000 (0:00:00.154) 0:00:42.662 ********
2026-04-16 05:41:01.617280 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617298 | orchestrator |
2026-04-16 05:41:01.617316 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-16 05:41:01.617335 | orchestrator | Thursday 16 April 2026 05:41:00 +0000 (0:00:00.104) 0:00:42.767 ********
2026-04-16 05:41:01.617347 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617358 | orchestrator |
2026-04-16 05:41:01.617369 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-16 05:41:01.617379 | orchestrator | Thursday 16 April 2026 05:41:00 +0000 (0:00:00.306) 0:00:43.074 ********
2026-04-16 05:41:01.617417 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 05:41:01.617437 | orchestrator |  "vgs_report": {
2026-04-16 05:41:01.617471 | orchestrator |  "vg": []
2026-04-16 05:41:01.617490 | orchestrator |  }
2026-04-16 05:41:01.617510 | orchestrator | }
2026-04-16 05:41:01.617529 | orchestrator |
2026-04-16 05:41:01.617547 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-16 05:41:01.617565 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.144) 0:00:43.218 ********
2026-04-16 05:41:01.617584 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617603 | orchestrator |
2026-04-16 05:41:01.617623 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-16 05:41:01.617641 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.131) 0:00:43.350 ********
2026-04-16 05:41:01.617660 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617676 | orchestrator |
2026-04-16 05:41:01.617694 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-16 05:41:01.617713 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.134) 0:00:43.485 ********
2026-04-16 05:41:01.617733 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617754 | orchestrator |
2026-04-16 05:41:01.617773 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-16 05:41:01.617794 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.128) 0:00:43.614 ********
2026-04-16 05:41:01.617831 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:01.617852 | orchestrator |
2026-04-16 05:41:01.617902 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-16 05:41:06.301553 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.134) 0:00:43.748 ********
2026-04-16 05:41:06.301686 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301702 | orchestrator |
2026-04-16 05:41:06.301715 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-16 05:41:06.301727 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.132) 0:00:43.881 ********
2026-04-16 05:41:06.301738 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301749 | orchestrator |
2026-04-16 05:41:06.301760 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-16 05:41:06.301772 | orchestrator | Thursday 16 April 2026 05:41:01 +0000 (0:00:00.137) 0:00:44.019 ********
2026-04-16 05:41:06.301782 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301793 | orchestrator |
2026-04-16 05:41:06.301824 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-16 05:41:06.301836 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.142) 0:00:44.162 ********
2026-04-16 05:41:06.301847 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301857 | orchestrator |
2026-04-16 05:41:06.301868 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-16 05:41:06.301879 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.138) 0:00:44.300 ********
2026-04-16 05:41:06.301890 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301901 | orchestrator |
2026-04-16 05:41:06.301912 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-16 05:41:06.301923 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.120) 0:00:44.420 ********
2026-04-16 05:41:06.301934 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301944 | orchestrator |
2026-04-16 05:41:06.301955 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-16 05:41:06.301967 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.323) 0:00:44.743 ********
2026-04-16 05:41:06.301977 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.301988 | orchestrator |
2026-04-16 05:41:06.301999 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-16 05:41:06.302010 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.139) 0:00:44.883 ********
2026-04-16 05:41:06.302231 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302247 | orchestrator |
2026-04-16 05:41:06.302260 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-16 05:41:06.302273 | orchestrator | Thursday 16 April 2026 05:41:02 +0000 (0:00:00.134) 0:00:45.018 ********
2026-04-16 05:41:06.302286 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302299 | orchestrator |
2026-04-16 05:41:06.302312 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-16 05:41:06.302324 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.148) 0:00:45.166 ********
2026-04-16 05:41:06.302338 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302350 | orchestrator |
2026-04-16 05:41:06.302363 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-16 05:41:06.302376 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.147) 0:00:45.314 ********
2026-04-16 05:41:06.302390 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302404 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302415 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302426 | orchestrator |
2026-04-16 05:41:06.302437 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-16 05:41:06.302476 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.172) 0:00:45.487 ********
2026-04-16 05:41:06.302488 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302510 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302521 | orchestrator |
2026-04-16 05:41:06.302532 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-16 05:41:06.302542 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.152) 0:00:45.639 ********
2026-04-16 05:41:06.302553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302575 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302586 | orchestrator |
2026-04-16 05:41:06.302598 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-16 05:41:06.302608 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.164) 0:00:45.804 ********
2026-04-16 05:41:06.302619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302631 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302641 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302652 | orchestrator |
2026-04-16 05:41:06.302711 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-16 05:41:06.302724 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.153) 0:00:45.957 ********
2026-04-16 05:41:06.302735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302746 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302756 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302767 | orchestrator |
2026-04-16 05:41:06.302786 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-16 05:41:06.302798 | orchestrator | Thursday 16 April 2026 05:41:03 +0000 (0:00:00.159) 0:00:46.117 ********
2026-04-16 05:41:06.302809 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302831 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302841 | orchestrator |
2026-04-16 05:41:06.302852 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-16 05:41:06.302863 | orchestrator | Thursday 16 April 2026 05:41:04 +0000 (0:00:00.167) 0:00:46.285 ********
2026-04-16 05:41:06.302874 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302885 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302896 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302907 | orchestrator |
2026-04-16 05:41:06.302926 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-16 05:41:06.302937 | orchestrator | Thursday 16 April 2026 05:41:04 +0000 (0:00:00.329) 0:00:46.614 ********
2026-04-16 05:41:06.302948 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.302959 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.302970 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.302981 | orchestrator |
2026-04-16 05:41:06.302992 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-16 05:41:06.303003 | orchestrator | Thursday 16 April 2026 05:41:04 +0000 (0:00:00.157) 0:00:46.772 ********
2026-04-16 05:41:06.303013 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:06.303025 | orchestrator |
2026-04-16 05:41:06.303036 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-16 05:41:06.303047 | orchestrator | Thursday 16 April 2026 05:41:05 +0000 (0:00:00.496) 0:00:47.268 ********
2026-04-16 05:41:06.303058 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:06.303068 | orchestrator |
2026-04-16 05:41:06.303079 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-16 05:41:06.303090 | orchestrator | Thursday 16 April 2026 05:41:05 +0000 (0:00:00.525) 0:00:47.794 ********
2026-04-16 05:41:06.303101 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:06.303112 | orchestrator |
2026-04-16 05:41:06.303123 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-16 05:41:06.303134 | orchestrator | Thursday 16 April 2026 05:41:05 +0000 (0:00:00.151) 0:00:47.946 ********
2026-04-16 05:41:06.303145 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'vg_name': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.303157 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'vg_name': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.303168 | orchestrator |
2026-04-16 05:41:06.303179 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-16 05:41:06.303207 | orchestrator | Thursday 16 April 2026 05:41:05 +0000 (0:00:00.170) 0:00:48.117 ********
2026-04-16 05:41:06.303218 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.303229 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:06.303240 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:06.303251 | orchestrator |
2026-04-16 05:41:06.303262 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-16 05:41:06.303273 | orchestrator | Thursday 16 April 2026 05:41:06 +0000 (0:00:00.165) 0:00:48.282 ********
2026-04-16 05:41:06.303284 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:41:06.303302 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:41:12.529901 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:12.530114 | orchestrator |
2026-04-16 05:41:12.530136 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-16 05:41:12.530150 |
orchestrator | Thursday 16 April 2026 05:41:06 +0000 (0:00:00.152) 0:00:48.435 ******** 2026-04-16 05:41:12.530163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})  2026-04-16 05:41:12.530269 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})  2026-04-16 05:41:12.530284 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:41:12.530295 | orchestrator | 2026-04-16 05:41:12.530307 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-04-16 05:41:12.530318 | orchestrator | Thursday 16 April 2026 05:41:06 +0000 (0:00:00.159) 0:00:48.595 ******** 2026-04-16 05:41:12.530330 | orchestrator | ok: [testbed-node-4] => { 2026-04-16 05:41:12.530341 | orchestrator |  "lvm_report": { 2026-04-16 05:41:12.530414 | orchestrator |  "lv": [ 2026-04-16 05:41:12.530432 | orchestrator |  { 2026-04-16 05:41:12.530446 | orchestrator |  "lv_name": "osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6", 2026-04-16 05:41:12.530461 | orchestrator |  "vg_name": "ceph-280a11fd-e83f-54f4-b253-754709c5cdf6" 2026-04-16 05:41:12.530475 | orchestrator |  }, 2026-04-16 05:41:12.530487 | orchestrator |  { 2026-04-16 05:41:12.530500 | orchestrator |  "lv_name": "osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f", 2026-04-16 05:41:12.530512 | orchestrator |  "vg_name": "ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f" 2026-04-16 05:41:12.530525 | orchestrator |  } 2026-04-16 05:41:12.530537 | orchestrator |  ], 2026-04-16 05:41:12.530549 | orchestrator |  "pv": [ 2026-04-16 05:41:12.530562 | orchestrator |  { 2026-04-16 05:41:12.530574 | orchestrator |  "pv_name": "/dev/sdb", 2026-04-16 05:41:12.530586 | orchestrator |  "vg_name": "ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f" 2026-04-16 05:41:12.530599 | orchestrator |  }, 2026-04-16 
05:41:12.530612 | orchestrator |  { 2026-04-16 05:41:12.530626 | orchestrator |  "pv_name": "/dev/sdc", 2026-04-16 05:41:12.530638 | orchestrator |  "vg_name": "ceph-280a11fd-e83f-54f4-b253-754709c5cdf6" 2026-04-16 05:41:12.530650 | orchestrator |  } 2026-04-16 05:41:12.530662 | orchestrator |  ] 2026-04-16 05:41:12.530675 | orchestrator |  } 2026-04-16 05:41:12.530688 | orchestrator | } 2026-04-16 05:41:12.530701 | orchestrator | 2026-04-16 05:41:12.530713 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-04-16 05:41:12.530726 | orchestrator | 2026-04-16 05:41:12.530738 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-04-16 05:41:12.530751 | orchestrator | Thursday 16 April 2026 05:41:06 +0000 (0:00:00.276) 0:00:48.871 ******** 2026-04-16 05:41:12.530763 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-04-16 05:41:12.530777 | orchestrator | 2026-04-16 05:41:12.530788 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-04-16 05:41:12.530799 | orchestrator | Thursday 16 April 2026 05:41:07 +0000 (0:00:00.647) 0:00:49.518 ******** 2026-04-16 05:41:12.530810 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:41:12.530822 | orchestrator | 2026-04-16 05:41:12.530833 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.530844 | orchestrator | Thursday 16 April 2026 05:41:07 +0000 (0:00:00.234) 0:00:49.753 ******** 2026-04-16 05:41:12.530855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-04-16 05:41:12.530866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-04-16 05:41:12.530877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-04-16 05:41:12.530888 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-04-16 05:41:12.530898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-04-16 05:41:12.530909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-04-16 05:41:12.530920 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-04-16 05:41:12.530942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-04-16 05:41:12.530953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-04-16 05:41:12.530964 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-04-16 05:41:12.530975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-04-16 05:41:12.530985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-04-16 05:41:12.530996 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-04-16 05:41:12.531007 | orchestrator | 2026-04-16 05:41:12.531018 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531029 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.397) 0:00:50.150 ******** 2026-04-16 05:41:12.531040 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531051 | orchestrator | 2026-04-16 05:41:12.531062 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531073 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.203) 0:00:50.354 ******** 2026-04-16 05:41:12.531084 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531095 | orchestrator | 2026-04-16 
05:41:12.531106 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531137 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.185) 0:00:50.539 ******** 2026-04-16 05:41:12.531150 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531161 | orchestrator | 2026-04-16 05:41:12.531172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531207 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.189) 0:00:50.729 ******** 2026-04-16 05:41:12.531218 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531229 | orchestrator | 2026-04-16 05:41:12.531240 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531251 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.204) 0:00:50.934 ******** 2026-04-16 05:41:12.531263 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531274 | orchestrator | 2026-04-16 05:41:12.531285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531295 | orchestrator | Thursday 16 April 2026 05:41:08 +0000 (0:00:00.190) 0:00:51.124 ******** 2026-04-16 05:41:12.531306 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531317 | orchestrator | 2026-04-16 05:41:12.531328 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531339 | orchestrator | Thursday 16 April 2026 05:41:09 +0000 (0:00:00.195) 0:00:51.320 ******** 2026-04-16 05:41:12.531350 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531361 | orchestrator | 2026-04-16 05:41:12.531372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531382 | orchestrator | Thursday 16 April 2026 05:41:09 +0000 (0:00:00.191) 
0:00:51.511 ******** 2026-04-16 05:41:12.531394 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:12.531405 | orchestrator | 2026-04-16 05:41:12.531415 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531427 | orchestrator | Thursday 16 April 2026 05:41:10 +0000 (0:00:00.682) 0:00:52.193 ******** 2026-04-16 05:41:12.531437 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd) 2026-04-16 05:41:12.531449 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd) 2026-04-16 05:41:12.531460 | orchestrator | 2026-04-16 05:41:12.531471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531482 | orchestrator | Thursday 16 April 2026 05:41:10 +0000 (0:00:00.432) 0:00:52.626 ******** 2026-04-16 05:41:12.531526 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e) 2026-04-16 05:41:12.531546 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e) 2026-04-16 05:41:12.531557 | orchestrator | 2026-04-16 05:41:12.531568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531579 | orchestrator | Thursday 16 April 2026 05:41:10 +0000 (0:00:00.431) 0:00:53.057 ******** 2026-04-16 05:41:12.531590 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042) 2026-04-16 05:41:12.531601 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042) 2026-04-16 05:41:12.531612 | orchestrator | 2026-04-16 05:41:12.531623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531633 | orchestrator | Thursday 16 
April 2026 05:41:11 +0000 (0:00:00.431) 0:00:53.488 ******** 2026-04-16 05:41:12.531659 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3) 2026-04-16 05:41:12.531671 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3) 2026-04-16 05:41:12.531682 | orchestrator | 2026-04-16 05:41:12.531703 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-04-16 05:41:12.531715 | orchestrator | Thursday 16 April 2026 05:41:11 +0000 (0:00:00.445) 0:00:53.934 ******** 2026-04-16 05:41:12.531725 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-04-16 05:41:12.531737 | orchestrator | 2026-04-16 05:41:12.531747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:12.531759 | orchestrator | Thursday 16 April 2026 05:41:12 +0000 (0:00:00.329) 0:00:54.264 ******** 2026-04-16 05:41:12.531769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-04-16 05:41:12.531780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-04-16 05:41:12.531791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-04-16 05:41:12.531802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-04-16 05:41:12.531812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-04-16 05:41:12.531823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-04-16 05:41:12.531834 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-04-16 05:41:12.531844 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-04-16 05:41:12.531855 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-04-16 05:41:12.531865 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-04-16 05:41:12.531877 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-04-16 05:41:12.531897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-04-16 05:41:21.149372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-04-16 05:41:21.149536 | orchestrator | 2026-04-16 05:41:21.149556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149569 | orchestrator | Thursday 16 April 2026 05:41:12 +0000 (0:00:00.392) 0:00:54.656 ******** 2026-04-16 05:41:21.149581 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149594 | orchestrator | 2026-04-16 05:41:21.149605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149637 | orchestrator | Thursday 16 April 2026 05:41:12 +0000 (0:00:00.198) 0:00:54.855 ******** 2026-04-16 05:41:21.149649 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149683 | orchestrator | 2026-04-16 05:41:21.149695 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149706 | orchestrator | Thursday 16 April 2026 05:41:12 +0000 (0:00:00.199) 0:00:55.055 ******** 2026-04-16 05:41:21.149717 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149728 | orchestrator | 2026-04-16 05:41:21.149738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149749 | 
orchestrator | Thursday 16 April 2026 05:41:13 +0000 (0:00:00.216) 0:00:55.272 ******** 2026-04-16 05:41:21.149760 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149771 | orchestrator | 2026-04-16 05:41:21.149782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149793 | orchestrator | Thursday 16 April 2026 05:41:13 +0000 (0:00:00.198) 0:00:55.470 ******** 2026-04-16 05:41:21.149803 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149814 | orchestrator | 2026-04-16 05:41:21.149828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149840 | orchestrator | Thursday 16 April 2026 05:41:13 +0000 (0:00:00.581) 0:00:56.052 ******** 2026-04-16 05:41:21.149852 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149864 | orchestrator | 2026-04-16 05:41:21.149877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149890 | orchestrator | Thursday 16 April 2026 05:41:14 +0000 (0:00:00.205) 0:00:56.257 ******** 2026-04-16 05:41:21.149902 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149914 | orchestrator | 2026-04-16 05:41:21.149927 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.149941 | orchestrator | Thursday 16 April 2026 05:41:14 +0000 (0:00:00.201) 0:00:56.458 ******** 2026-04-16 05:41:21.149961 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.149988 | orchestrator | 2026-04-16 05:41:21.150009 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.150122 | orchestrator | Thursday 16 April 2026 05:41:14 +0000 (0:00:00.200) 0:00:56.659 ******** 2026-04-16 05:41:21.150145 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-04-16 05:41:21.150158 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-04-16 05:41:21.150208 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-04-16 05:41:21.150222 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-04-16 05:41:21.150233 | orchestrator | 2026-04-16 05:41:21.150244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.150255 | orchestrator | Thursday 16 April 2026 05:41:15 +0000 (0:00:00.649) 0:00:57.308 ******** 2026-04-16 05:41:21.150265 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150276 | orchestrator | 2026-04-16 05:41:21.150286 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.150297 | orchestrator | Thursday 16 April 2026 05:41:15 +0000 (0:00:00.201) 0:00:57.509 ******** 2026-04-16 05:41:21.150308 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150318 | orchestrator | 2026-04-16 05:41:21.150329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.150340 | orchestrator | Thursday 16 April 2026 05:41:15 +0000 (0:00:00.200) 0:00:57.710 ******** 2026-04-16 05:41:21.150350 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150361 | orchestrator | 2026-04-16 05:41:21.150372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-04-16 05:41:21.150382 | orchestrator | Thursday 16 April 2026 05:41:15 +0000 (0:00:00.191) 0:00:57.902 ******** 2026-04-16 05:41:21.150393 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150404 | orchestrator | 2026-04-16 05:41:21.150415 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-04-16 05:41:21.150425 | orchestrator | Thursday 16 April 2026 05:41:15 +0000 (0:00:00.216) 0:00:58.119 ******** 2026-04-16 05:41:21.150437 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
05:41:21.150447 | orchestrator | 2026-04-16 05:41:21.150472 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-04-16 05:41:21.150483 | orchestrator | Thursday 16 April 2026 05:41:16 +0000 (0:00:00.137) 0:00:58.256 ******** 2026-04-16 05:41:21.150502 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4d9f1eac-7172-5024-9561-d385c629a6f5'}}) 2026-04-16 05:41:21.150529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '44db58af-23ca-547e-81cd-90c78ecf63d9'}}) 2026-04-16 05:41:21.150549 | orchestrator | 2026-04-16 05:41:21.150567 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-04-16 05:41:21.150583 | orchestrator | Thursday 16 April 2026 05:41:16 +0000 (0:00:00.198) 0:00:58.455 ******** 2026-04-16 05:41:21.150618 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}) 2026-04-16 05:41:21.150641 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}) 2026-04-16 05:41:21.150661 | orchestrator | 2026-04-16 05:41:21.150679 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-04-16 05:41:21.150730 | orchestrator | Thursday 16 April 2026 05:41:18 +0000 (0:00:01.868) 0:01:00.324 ******** 2026-04-16 05:41:21.150751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:21.150765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:21.150776 | orchestrator | skipping: 
[testbed-node-5] 2026-04-16 05:41:21.150787 | orchestrator | 2026-04-16 05:41:21.150807 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-04-16 05:41:21.150818 | orchestrator | Thursday 16 April 2026 05:41:18 +0000 (0:00:00.336) 0:01:00.660 ******** 2026-04-16 05:41:21.150829 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}) 2026-04-16 05:41:21.150840 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}) 2026-04-16 05:41:21.150850 | orchestrator | 2026-04-16 05:41:21.150861 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-04-16 05:41:21.150872 | orchestrator | Thursday 16 April 2026 05:41:19 +0000 (0:00:01.342) 0:01:02.003 ******** 2026-04-16 05:41:21.150883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:21.150894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:21.150904 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150915 | orchestrator | 2026-04-16 05:41:21.150926 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-04-16 05:41:21.150937 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.153) 0:01:02.157 ******** 2026-04-16 05:41:21.150947 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.150958 | orchestrator | 2026-04-16 05:41:21.150969 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-04-16 05:41:21.150979 | 
orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.139) 0:01:02.296 ******** 2026-04-16 05:41:21.150990 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:21.151001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:21.151022 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.151033 | orchestrator | 2026-04-16 05:41:21.151043 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-04-16 05:41:21.151054 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.149) 0:01:02.446 ******** 2026-04-16 05:41:21.151065 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.151075 | orchestrator | 2026-04-16 05:41:21.151086 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-04-16 05:41:21.151097 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.138) 0:01:02.584 ******** 2026-04-16 05:41:21.151108 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:21.151119 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:21.151129 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.151140 | orchestrator | 2026-04-16 05:41:21.151151 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-04-16 05:41:21.151162 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.145) 0:01:02.729 ******** 2026-04-16 05:41:21.151204 | orchestrator | 
skipping: [testbed-node-5] 2026-04-16 05:41:21.151215 | orchestrator | 2026-04-16 05:41:21.151226 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-04-16 05:41:21.151237 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.135) 0:01:02.865 ******** 2026-04-16 05:41:21.151248 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:21.151259 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:21.151270 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:21.151281 | orchestrator | 2026-04-16 05:41:21.151292 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-04-16 05:41:21.151302 | orchestrator | Thursday 16 April 2026 05:41:20 +0000 (0:00:00.145) 0:01:03.011 ******** 2026-04-16 05:41:21.151313 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:41:21.151324 | orchestrator | 2026-04-16 05:41:21.151335 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-04-16 05:41:21.151346 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 (0:00:00.131) 0:01:03.142 ******** 2026-04-16 05:41:21.151365 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:27.259767 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:27.259884 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:27.259901 | orchestrator | 2026-04-16 05:41:27.259914 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-04-16 05:41:27.259926 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 (0:00:00.140) 0:01:03.283 ******** 2026-04-16 05:41:27.259954 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:27.259966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:27.259977 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:27.259988 | orchestrator | 2026-04-16 05:41:27.259999 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-04-16 05:41:27.260010 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 (0:00:00.145) 0:01:03.429 ******** 2026-04-16 05:41:27.260042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})  2026-04-16 05:41:27.260054 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})  2026-04-16 05:41:27.260065 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:27.260075 | orchestrator | 2026-04-16 05:41:27.260086 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-04-16 05:41:27.260097 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 (0:00:00.332) 0:01:03.761 ******** 2026-04-16 05:41:27.260108 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:41:27.260119 | orchestrator | 2026-04-16 05:41:27.260130 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-04-16 05:41:27.260140 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 
(0:00:00.139) 0:01:03.901 ********
2026-04-16 05:41:27.260151 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260191 | orchestrator |
2026-04-16 05:41:27.260212 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-04-16 05:41:27.260224 | orchestrator | Thursday 16 April 2026 05:41:21 +0000 (0:00:00.140) 0:01:04.042 ********
2026-04-16 05:41:27.260237 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260250 | orchestrator |
2026-04-16 05:41:27.260262 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-04-16 05:41:27.260275 | orchestrator | Thursday 16 April 2026 05:41:22 +0000 (0:00:00.143) 0:01:04.185 ********
2026-04-16 05:41:27.260287 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 05:41:27.260301 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-04-16 05:41:27.260314 | orchestrator | }
2026-04-16 05:41:27.260327 | orchestrator |
2026-04-16 05:41:27.260339 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-04-16 05:41:27.260352 | orchestrator | Thursday 16 April 2026 05:41:22 +0000 (0:00:00.145) 0:01:04.331 ********
2026-04-16 05:41:27.260364 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 05:41:27.260377 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-04-16 05:41:27.260389 | orchestrator | }
2026-04-16 05:41:27.260401 | orchestrator |
2026-04-16 05:41:27.260414 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-04-16 05:41:27.260426 | orchestrator | Thursday 16 April 2026 05:41:22 +0000 (0:00:00.141) 0:01:04.473 ********
2026-04-16 05:41:27.260439 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 05:41:27.260452 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-04-16 05:41:27.260464 | orchestrator | }
2026-04-16 05:41:27.260476 | orchestrator |
2026-04-16 05:41:27.260489 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-04-16 05:41:27.260501 | orchestrator | Thursday 16 April 2026 05:41:22 +0000 (0:00:00.148) 0:01:04.621 ********
2026-04-16 05:41:27.260514 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:27.260527 | orchestrator |
2026-04-16 05:41:27.260539 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-04-16 05:41:27.260552 | orchestrator | Thursday 16 April 2026 05:41:23 +0000 (0:00:00.531) 0:01:05.153 ********
2026-04-16 05:41:27.260564 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:27.260576 | orchestrator |
2026-04-16 05:41:27.260589 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-04-16 05:41:27.260600 | orchestrator | Thursday 16 April 2026 05:41:23 +0000 (0:00:00.514) 0:01:05.668 ********
2026-04-16 05:41:27.260610 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:27.260621 | orchestrator |
2026-04-16 05:41:27.260632 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-04-16 05:41:27.260643 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.512) 0:01:06.180 ********
2026-04-16 05:41:27.260654 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:27.260665 | orchestrator |
2026-04-16 05:41:27.260677 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-04-16 05:41:27.260706 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.140) 0:01:06.321 ********
2026-04-16 05:41:27.260718 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260728 | orchestrator |
2026-04-16 05:41:27.260739 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-04-16 05:41:27.260750 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.106) 0:01:06.427 ********
2026-04-16 05:41:27.260761 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260771 | orchestrator |
2026-04-16 05:41:27.260782 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-04-16 05:41:27.260793 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.283) 0:01:06.710 ********
2026-04-16 05:41:27.260803 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 05:41:27.260814 | orchestrator |  "vgs_report": {
2026-04-16 05:41:27.260825 | orchestrator |  "vg": []
2026-04-16 05:41:27.260853 | orchestrator |  }
2026-04-16 05:41:27.260865 | orchestrator | }
2026-04-16 05:41:27.260876 | orchestrator |
2026-04-16 05:41:27.260887 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-04-16 05:41:27.260898 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.145) 0:01:06.855 ********
2026-04-16 05:41:27.260908 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260919 | orchestrator |
2026-04-16 05:41:27.260930 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-04-16 05:41:27.260940 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.133) 0:01:06.989 ********
2026-04-16 05:41:27.260957 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.260969 | orchestrator |
2026-04-16 05:41:27.260979 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-04-16 05:41:27.260990 | orchestrator | Thursday 16 April 2026 05:41:24 +0000 (0:00:00.136) 0:01:07.126 ********
2026-04-16 05:41:27.261001 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261012 | orchestrator |
2026-04-16 05:41:27.261023 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-04-16 05:41:27.261034 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.136) 0:01:07.263 ********
2026-04-16 05:41:27.261044 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261055 | orchestrator |
2026-04-16 05:41:27.261066 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-04-16 05:41:27.261077 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.132) 0:01:07.395 ********
2026-04-16 05:41:27.261087 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261098 | orchestrator |
2026-04-16 05:41:27.261109 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-04-16 05:41:27.261120 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.136) 0:01:07.531 ********
2026-04-16 05:41:27.261130 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261141 | orchestrator |
2026-04-16 05:41:27.261152 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-04-16 05:41:27.261193 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.134) 0:01:07.666 ********
2026-04-16 05:41:27.261205 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261216 | orchestrator |
2026-04-16 05:41:27.261227 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-04-16 05:41:27.261237 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.129) 0:01:07.795 ********
2026-04-16 05:41:27.261248 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261259 | orchestrator |
2026-04-16 05:41:27.261270 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-04-16 05:41:27.261281 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.134) 0:01:07.930 ********
2026-04-16 05:41:27.261292 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261302 | orchestrator |
2026-04-16 05:41:27.261313 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-04-16 05:41:27.261324 | orchestrator | Thursday 16 April 2026 05:41:25 +0000 (0:00:00.132) 0:01:08.063 ********
2026-04-16 05:41:27.261343 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261353 | orchestrator |
2026-04-16 05:41:27.261364 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-04-16 05:41:27.261375 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.136) 0:01:08.200 ********
2026-04-16 05:41:27.261386 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261397 | orchestrator |
2026-04-16 05:41:27.261408 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-04-16 05:41:27.261419 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.311) 0:01:08.511 ********
2026-04-16 05:41:27.261429 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261440 | orchestrator |
2026-04-16 05:41:27.261451 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-04-16 05:41:27.261462 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.144) 0:01:08.656 ********
2026-04-16 05:41:27.261472 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261483 | orchestrator |
2026-04-16 05:41:27.261494 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-04-16 05:41:27.261504 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.135) 0:01:08.791 ********
2026-04-16 05:41:27.261515 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261526 | orchestrator |
2026-04-16 05:41:27.261537 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-04-16 05:41:27.261547 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.152) 0:01:08.943 ********
2026-04-16 05:41:27.261559 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:27.261570 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:27.261581 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261592 | orchestrator |
2026-04-16 05:41:27.261603 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-04-16 05:41:27.261613 | orchestrator | Thursday 16 April 2026 05:41:26 +0000 (0:00:00.155) 0:01:09.098 ********
2026-04-16 05:41:27.261624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:27.261635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:27.261646 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:27.261656 | orchestrator |
2026-04-16 05:41:27.261667 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-04-16 05:41:27.261678 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.150) 0:01:09.249 ********
2026-04-16 05:41:27.261697 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229530 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229543 | orchestrator |
2026-04-16 05:41:30.229568 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-04-16 05:41:30.229578 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.144) 0:01:09.394 ********
2026-04-16 05:41:30.229587 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229627 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229637 | orchestrator |
2026-04-16 05:41:30.229645 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-04-16 05:41:30.229654 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.149) 0:01:09.544 ********
2026-04-16 05:41:30.229663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229680 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229689 | orchestrator |
2026-04-16 05:41:30.229697 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-04-16 05:41:30.229706 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.158) 0:01:09.703 ********
2026-04-16 05:41:30.229714 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229732 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229740 | orchestrator |
2026-04-16 05:41:30.229748 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-04-16 05:41:30.229757 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.140) 0:01:09.843 ********
2026-04-16 05:41:30.229765 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229774 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229783 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229791 | orchestrator |
2026-04-16 05:41:30.229800 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-04-16 05:41:30.229808 | orchestrator | Thursday 16 April 2026 05:41:27 +0000 (0:00:00.148) 0:01:09.991 ********
2026-04-16 05:41:30.229816 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.229825 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.229834 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.229842 | orchestrator |
2026-04-16 05:41:30.229851 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-04-16 05:41:30.229860 | orchestrator | Thursday 16 April 2026 05:41:28 +0000 (0:00:00.155) 0:01:10.147 ********
2026-04-16 05:41:30.229868 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:30.229877 | orchestrator |
2026-04-16 05:41:30.229886 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-04-16 05:41:30.229894 | orchestrator | Thursday 16 April 2026 05:41:28 +0000 (0:00:00.731) 0:01:10.879 ********
2026-04-16 05:41:30.229903 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:30.229911 | orchestrator |
2026-04-16 05:41:30.229926 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-04-16 05:41:30.229943 | orchestrator | Thursday 16 April 2026 05:41:29 +0000 (0:00:00.509) 0:01:11.389 ********
2026-04-16 05:41:30.229957 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:30.229972 | orchestrator |
2026-04-16 05:41:30.229986 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-04-16 05:41:30.230000 | orchestrator | Thursday 16 April 2026 05:41:29 +0000 (0:00:00.147) 0:01:11.536 ********
2026-04-16 05:41:30.230092 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'vg_name': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.230113 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'vg_name': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.230127 | orchestrator |
2026-04-16 05:41:30.230142 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-04-16 05:41:30.230156 | orchestrator | Thursday 16 April 2026 05:41:29 +0000 (0:00:00.170) 0:01:11.707 ********
2026-04-16 05:41:30.230208 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.230232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.230247 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.230261 | orchestrator |
2026-04-16 05:41:30.230275 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-04-16 05:41:30.230290 | orchestrator | Thursday 16 April 2026 05:41:29 +0000 (0:00:00.160) 0:01:11.868 ********
2026-04-16 05:41:30.230305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.230320 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.230334 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.230349 | orchestrator |
2026-04-16 05:41:30.230362 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-04-16 05:41:30.230378 | orchestrator | Thursday 16 April 2026 05:41:29 +0000 (0:00:00.172) 0:01:12.040 ********
2026-04-16 05:41:30.230387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:41:30.230396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:41:30.230404 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:30.230413 | orchestrator |
2026-04-16 05:41:30.230421 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-04-16 05:41:30.230430 | orchestrator | Thursday 16 April 2026 05:41:30 +0000 (0:00:00.153) 0:01:12.194 ********
2026-04-16 05:41:30.230438 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 05:41:30.230447 | orchestrator |  "lvm_report": {
2026-04-16 05:41:30.230456 | orchestrator |  "lv": [
2026-04-16 05:41:30.230464 | orchestrator |  {
2026-04-16 05:41:30.230472 | orchestrator |  "lv_name": "osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9",
2026-04-16 05:41:30.230482 | orchestrator |  "vg_name": "ceph-44db58af-23ca-547e-81cd-90c78ecf63d9"
2026-04-16 05:41:30.230490 | orchestrator |  },
2026-04-16 05:41:30.230498 | orchestrator |  {
2026-04-16 05:41:30.230507 | orchestrator |  "lv_name": "osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5",
2026-04-16 05:41:30.230515 | orchestrator |  "vg_name": "ceph-4d9f1eac-7172-5024-9561-d385c629a6f5"
2026-04-16 05:41:30.230524 | orchestrator |  }
2026-04-16 05:41:30.230532 | orchestrator |  ],
2026-04-16 05:41:30.230541 | orchestrator |  "pv": [
2026-04-16 05:41:30.230549 | orchestrator |  {
2026-04-16 05:41:30.230557 | orchestrator |  "pv_name": "/dev/sdb",
2026-04-16 05:41:30.230566 | orchestrator |  "vg_name": "ceph-4d9f1eac-7172-5024-9561-d385c629a6f5"
2026-04-16 05:41:30.230574 | orchestrator |  },
2026-04-16 05:41:30.230583 | orchestrator |  {
2026-04-16 05:41:30.230591 | orchestrator |  "pv_name": "/dev/sdc",
2026-04-16 05:41:30.230611 | orchestrator |  "vg_name": "ceph-44db58af-23ca-547e-81cd-90c78ecf63d9"
2026-04-16 05:41:30.230620 | orchestrator |  }
2026-04-16 05:41:30.230628 | orchestrator |  ]
2026-04-16 05:41:30.230636 | orchestrator |  }
2026-04-16 05:41:30.230645 | orchestrator | }
2026-04-16 05:41:30.230654 | orchestrator |
2026-04-16 05:41:30.230662 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:41:30.230671 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-16 05:41:30.230679 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-16 05:41:30.230688 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2026-04-16 05:41:30.230697 | orchestrator |
2026-04-16 05:41:30.230705 | orchestrator |
2026-04-16 05:41:30.230713 | orchestrator |
2026-04-16 05:41:30.230722 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:41:30.230730 | orchestrator | Thursday 16 April 2026 05:41:30 +0000 (0:00:00.149) 0:01:12.344 ********
2026-04-16 05:41:30.230739 | orchestrator | ===============================================================================
2026-04-16 05:41:30.230747 | orchestrator | Create block VGs -------------------------------------------------------- 5.69s
2026-04-16 05:41:30.230756 | orchestrator | Create block LVs -------------------------------------------------------- 4.21s
2026-04-16 05:41:30.230764 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s
2026-04-16 05:41:30.230772 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.76s
2026-04-16 05:41:30.230782 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.65s
2026-04-16 05:41:30.230797 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s
2026-04-16 05:41:30.230811 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.57s
2026-04-16 05:41:30.230827 | orchestrator | Add known links to the list of available block devices ------------------ 1.31s
2026-04-16 05:41:30.230850 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s
2026-04-16 05:41:30.585607 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.14s
2026-04-16 05:41:30.585706 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s
2026-04-16 05:41:30.585719 | orchestrator | Add known links to the list of available block devices ------------------ 0.80s
2026-04-16 05:41:30.585749 | orchestrator | Get initial list of available block devices ----------------------------- 0.72s
2026-04-16 05:41:30.585759 | orchestrator | Print LVM report data --------------------------------------------------- 0.72s
2026-04-16 05:41:30.585769 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.70s
2026-04-16 05:41:30.585778 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.69s
2026-04-16 05:41:30.585788 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s
2026-04-16 05:41:30.585797 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s
2026-04-16 05:41:30.585807 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.67s
2026-04-16 05:41:30.585817 | orchestrator | Print 'Create block VGs' ------------------------------------------------ 0.65s
2026-04-16 05:41:42.831299 | orchestrator | 2026-04-16 05:41:42 | INFO  | Task 4d5d56a2-4156-4eb0-a29b-8dadc69dc2a9 (facts) was prepared for execution.
2026-04-16 05:41:42.831420 | orchestrator | 2026-04-16 05:41:42 | INFO  | It takes a moment until task 4d5d56a2-4156-4eb0-a29b-8dadc69dc2a9 (facts) has been started and output is visible here.
2026-04-16 05:41:55.628008 | orchestrator |
2026-04-16 05:41:55.628124 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-16 05:41:55.628233 | orchestrator |
2026-04-16 05:41:55.628249 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-16 05:41:55.628260 | orchestrator | Thursday 16 April 2026 05:41:46 +0000 (0:00:00.254) 0:00:00.254 ********
2026-04-16 05:41:55.628272 | orchestrator | ok: [testbed-manager]
2026-04-16 05:41:55.628284 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:41:55.628294 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:41:55.628305 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:41:55.628316 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:41:55.628326 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:55.628337 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:55.628347 | orchestrator |
2026-04-16 05:41:55.628358 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-16 05:41:55.628369 | orchestrator | Thursday 16 April 2026 05:41:48 +0000 (0:00:01.168) 0:00:01.423 ********
2026-04-16 05:41:55.628380 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:41:55.628392 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:41:55.628403 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:41:55.628413 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:41:55.628424 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:41:55.628435 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:55.628445 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:55.628456 | orchestrator |
2026-04-16 05:41:55.628466 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-16 05:41:55.628477 | orchestrator |
2026-04-16 05:41:55.628488 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 05:41:55.628499 | orchestrator | Thursday 16 April 2026 05:41:49 +0000 (0:00:01.259) 0:00:02.682 ********
2026-04-16 05:41:55.628510 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:41:55.628520 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:41:55.628531 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:41:55.628542 | orchestrator | ok: [testbed-manager]
2026-04-16 05:41:55.628555 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:41:55.628567 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:41:55.628580 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:41:55.628592 | orchestrator |
2026-04-16 05:41:55.628605 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-16 05:41:55.628616 | orchestrator |
2026-04-16 05:41:55.628627 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-16 05:41:55.628638 | orchestrator | Thursday 16 April 2026 05:41:54 +0000 (0:00:05.423) 0:00:08.106 ********
2026-04-16 05:41:55.628649 | orchestrator | skipping: [testbed-manager]
2026-04-16 05:41:55.628660 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:41:55.628671 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:41:55.628681 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:41:55.628692 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:41:55.628702 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:41:55.628714 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:41:55.628732 | orchestrator |
2026-04-16 05:41:55.628750 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:41:55.628768 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628789 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628808 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628821 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628832 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628851 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628862 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 05:41:55.628873 | orchestrator |
2026-04-16 05:41:55.628884 | orchestrator |
2026-04-16 05:41:55.628894 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:41:55.628919 | orchestrator | Thursday 16 April 2026 05:41:55 +0000 (0:00:00.502) 0:00:08.609 ********
2026-04-16 05:41:55.628930 | orchestrator | ===============================================================================
2026-04-16 05:41:55.628941 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.42s
2026-04-16 05:41:55.628951 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s
2026-04-16 05:41:55.628962 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s
2026-04-16 05:41:55.628972 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s
2026-04-16 05:41:57.934477 | orchestrator | 2026-04-16 05:41:57 | INFO  | Task e724e97c-2159-4ebf-9710-a3f93215312d (ceph) was prepared for execution.
2026-04-16 05:41:57.934563 | orchestrator | 2026-04-16 05:41:57 | INFO  | It takes a moment until task e724e97c-2159-4ebf-9710-a3f93215312d (ceph) has been started and output is visible here.
2026-04-16 05:42:13.873325 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-16 05:42:13.873443 | orchestrator | 2.16.14
2026-04-16 05:42:13.873460 | orchestrator |
2026-04-16 05:42:13.873472 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-04-16 05:42:13.873485 | orchestrator |
2026-04-16 05:42:13.873496 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 05:42:13.873508 | orchestrator | Thursday 16 April 2026 05:42:02 +0000 (0:00:00.565) 0:00:00.565 ********
2026-04-16 05:42:13.873520 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:42:13.873532 | orchestrator |
2026-04-16 05:42:13.873543 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 05:42:13.873562 | orchestrator | Thursday 16 April 2026 05:42:03 +0000 (0:00:00.960) 0:00:01.526 ********
2026-04-16 05:42:13.873581 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.873601 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.873627 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.873649 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.873667 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.873686 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.873705 | orchestrator |
2026-04-16 05:42:13.873724 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 05:42:13.873742 | orchestrator | Thursday 16 April 2026 05:42:04 +0000 (0:00:01.208) 0:00:02.734 ********
2026-04-16 05:42:13.873761 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.873781 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.873800 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.873820 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.873839 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.873858 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.873877 | orchestrator |
2026-04-16 05:42:13.873898 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 05:42:13.873919 | orchestrator | Thursday 16 April 2026 05:42:04 +0000 (0:00:00.800) 0:00:03.347 ********
2026-04-16 05:42:13.873940 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.873975 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.873989 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874002 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874104 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874154 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874170 | orchestrator |
2026-04-16 05:42:13.874181 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 05:42:13.874193 | orchestrator | Thursday 16 April 2026 05:42:05 +0000 (0:00:00.636) 0:00:04.148 ********
2026-04-16 05:42:13.874204 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.874214 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.874225 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874235 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874246 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874257 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874267 | orchestrator |
2026-04-16 05:42:13.874278 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 05:42:13.874289 | orchestrator | Thursday 16 April 2026 05:42:06 +0000 (0:00:00.544) 0:00:04.785 ********
2026-04-16 05:42:13.874299 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.874310 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.874320 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874331 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874341 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874352 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874363 | orchestrator |
2026-04-16 05:42:13.874373 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 05:42:13.874384 | orchestrator | Thursday 16 April 2026 05:42:06 +0000 (0:00:00.734) 0:00:05.329 ********
2026-04-16 05:42:13.874395 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.874406 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.874416 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874427 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874437 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874447 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874458 | orchestrator |
2026-04-16 05:42:13.874469 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 05:42:13.874480 | orchestrator | Thursday 16 April 2026 05:42:07 +0000 (0:00:00.551) 0:00:06.063 ********
2026-04-16 05:42:13.874491 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:13.874503 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:13.874513 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:13.874524 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:13.874535 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:13.874546 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:13.874557 | orchestrator |
2026-04-16 05:42:13.874567 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 05:42:13.874579 | orchestrator | Thursday 16 April 2026 05:42:08 +0000 (0:00:00.551) 0:00:06.615 ********
2026-04-16 05:42:13.874589 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.874600 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.874610 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874621 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874631 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874657 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874669 | orchestrator |
2026-04-16 05:42:13.874680 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 05:42:13.874690 | orchestrator | Thursday 16 April 2026 05:42:08 +0000 (0:00:00.711) 0:00:07.327 ********
2026-04-16 05:42:13.874701 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:42:13.874712 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:42:13.874723 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:42:13.874733 | orchestrator |
2026-04-16 05:42:13.874744 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 05:42:13.874755 | orchestrator | Thursday 16 April 2026 05:42:09 +0000 (0:00:00.619) 0:00:07.946 ********
2026-04-16 05:42:13.874774 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:13.874785 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:13.874798 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:13.874844 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:13.874863 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:13.874881 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:13.874900 | orchestrator |
2026-04-16 05:42:13.874917 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 05:42:13.874936 | orchestrator | Thursday 16 April 2026 05:42:10 +0000 (0:00:00.766) 0:00:08.713 ********
2026-04-16 05:42:13.874955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0) 2026-04-16 05:42:13.874974 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 05:42:13.874994 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 05:42:13.875012 | orchestrator | 2026-04-16 05:42:13.875030 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 05:42:13.875050 | orchestrator | Thursday 16 April 2026 05:42:12 +0000 (0:00:02.203) 0:00:10.916 ******** 2026-04-16 05:42:13.875069 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-16 05:42:13.875088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-16 05:42:13.875103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-16 05:42:13.875114 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:13.875145 | orchestrator | 2026-04-16 05:42:13.875157 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 05:42:13.875168 | orchestrator | Thursday 16 April 2026 05:42:12 +0000 (0:00:00.392) 0:00:11.309 ******** 2026-04-16 05:42:13.875181 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875196 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875207 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875218 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:13.875229 | orchestrator | 2026-04-16 05:42:13.875241 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 05:42:13.875252 | orchestrator | Thursday 16 April 2026 05:42:13 +0000 (0:00:00.582) 0:00:11.892 ******** 2026-04-16 05:42:13.875265 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875279 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875290 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:13.875311 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:13.875323 | orchestrator | 2026-04-16 05:42:13.875341 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] 
*************************** 2026-04-16 05:42:13.875352 | orchestrator | Thursday 16 April 2026 05:42:13 +0000 (0:00:00.158) 0:00:12.050 ******** 2026-04-16 05:42:13.875377 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 05:42:11.159209', 'end': '2026-04-16 05:42:11.210375', 'delta': '0:00:00.051166', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 05:42:22.691099 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 05:42:11.689213', 'end': '2026-04-16 05:42:11.733888', 'delta': '0:00:00.044675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 05:42:22.691235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 05:42:12.204856', 'end': '2026-04-16 05:42:12.251665', 'delta': 
'0:00:00.046809', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 05:42:22.691255 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691269 | orchestrator | 2026-04-16 05:42:22.691281 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 05:42:22.691294 | orchestrator | Thursday 16 April 2026 05:42:13 +0000 (0:00:00.169) 0:00:12.220 ******** 2026-04-16 05:42:22.691317 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:42:22.691330 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:42:22.691340 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:42:22.691351 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:42:22.691362 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:42:22.691372 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:42:22.691383 | orchestrator | 2026-04-16 05:42:22.691394 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 05:42:22.691405 | orchestrator | Thursday 16 April 2026 05:42:14 +0000 (0:00:00.683) 0:00:12.903 ******** 2026-04-16 05:42:22.691416 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:42:22.691427 | orchestrator | 2026-04-16 05:42:22.691438 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 05:42:22.691449 | orchestrator | Thursday 16 April 2026 05:42:15 +0000 (0:00:00.820) 0:00:13.724 ******** 2026-04-16 05:42:22.691484 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691496 | 
orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.691506 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.691517 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.691528 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.691538 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.691549 | orchestrator | 2026-04-16 05:42:22.691560 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 05:42:22.691571 | orchestrator | Thursday 16 April 2026 05:42:16 +0000 (0:00:00.756) 0:00:14.481 ******** 2026-04-16 05:42:22.691582 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691592 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.691603 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.691615 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.691628 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.691640 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.691652 | orchestrator | 2026-04-16 05:42:22.691665 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 05:42:22.691677 | orchestrator | Thursday 16 April 2026 05:42:17 +0000 (0:00:01.025) 0:00:15.507 ******** 2026-04-16 05:42:22.691690 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691703 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.691716 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.691728 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.691740 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.691766 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.691779 | orchestrator | 2026-04-16 05:42:22.691791 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 05:42:22.691803 | orchestrator | Thursday 16 April 2026 05:42:17 
+0000 (0:00:00.532) 0:00:16.040 ******** 2026-04-16 05:42:22.691815 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691827 | orchestrator | 2026-04-16 05:42:22.691840 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 05:42:22.691852 | orchestrator | Thursday 16 April 2026 05:42:17 +0000 (0:00:00.098) 0:00:16.138 ******** 2026-04-16 05:42:22.691865 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691878 | orchestrator | 2026-04-16 05:42:22.691890 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 05:42:22.691902 | orchestrator | Thursday 16 April 2026 05:42:17 +0000 (0:00:00.212) 0:00:16.351 ******** 2026-04-16 05:42:22.691915 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.691927 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.691939 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.691952 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.691964 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.691976 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.691986 | orchestrator | 2026-04-16 05:42:22.692015 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 05:42:22.692027 | orchestrator | Thursday 16 April 2026 05:42:18 +0000 (0:00:00.714) 0:00:17.065 ******** 2026-04-16 05:42:22.692037 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692048 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692059 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692070 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692080 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692091 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.692101 | orchestrator | 2026-04-16 05:42:22.692163 | orchestrator | TASK [ceph-facts : 
Set_fact build devices from resolved symlinks] ************** 2026-04-16 05:42:22.692175 | orchestrator | Thursday 16 April 2026 05:42:19 +0000 (0:00:00.557) 0:00:17.622 ******** 2026-04-16 05:42:22.692186 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692197 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692207 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692227 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692238 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692248 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.692259 | orchestrator | 2026-04-16 05:42:22.692269 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 05:42:22.692280 | orchestrator | Thursday 16 April 2026 05:42:19 +0000 (0:00:00.712) 0:00:18.334 ******** 2026-04-16 05:42:22.692291 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692302 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692312 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692323 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692333 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692344 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.692354 | orchestrator | 2026-04-16 05:42:22.692365 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 05:42:22.692376 | orchestrator | Thursday 16 April 2026 05:42:20 +0000 (0:00:00.595) 0:00:18.929 ******** 2026-04-16 05:42:22.692387 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692397 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692408 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692418 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692429 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692439 | orchestrator 
| skipping: [testbed-node-2] 2026-04-16 05:42:22.692450 | orchestrator | 2026-04-16 05:42:22.692461 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 05:42:22.692472 | orchestrator | Thursday 16 April 2026 05:42:21 +0000 (0:00:00.700) 0:00:19.630 ******** 2026-04-16 05:42:22.692482 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692493 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692503 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692514 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692524 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692535 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.692545 | orchestrator | 2026-04-16 05:42:22.692556 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 05:42:22.692568 | orchestrator | Thursday 16 April 2026 05:42:21 +0000 (0:00:00.589) 0:00:20.220 ******** 2026-04-16 05:42:22.692579 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:22.692589 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:22.692600 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:22.692610 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:22.692621 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:22.692631 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:42:22.692642 | orchestrator | 2026-04-16 05:42:22.692653 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 05:42:22.692663 | orchestrator | Thursday 16 April 2026 05:42:22 +0000 (0:00:00.715) 0:00:20.936 ******** 2026-04-16 05:42:22.692676 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.692697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.692724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810888 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.810936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.810964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.810999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.811029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.892899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.892979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.892992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.893002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.893012 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-16 05:42:22.893056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.893065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.893073 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:22.893099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.893139 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.893171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.893186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:22.893232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.080228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 
'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080500 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:42:23.080513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.080534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.080557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.080577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.254593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.254689 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.254729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.254885 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.254898 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:42:23.254911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.254940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-16 05:42:23.461976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.462415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:42:23.462448 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:23.462468 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:23.462485 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:23.462506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:42:23.462616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-04-16 05:42:23.462635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-16 05:42:23.462668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-04-16 05:42:23.664317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-16 05:42:23.664448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-04-16 05:42:23.664478 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:23.664498 | orchestrator |
2026-04-16 05:42:23.664518 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 05:42:23.664537 | orchestrator | Thursday 16 April 2026 05:42:23 +0000 (0:00:00.878) 0:00:21.814 ********
2026-04-16 05:42:23.664560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664663 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.664788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.938915 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939010 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939064 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939073 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:23.939083 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.120991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.121021 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.121044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.121068 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.121090 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200691 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:24.200809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200884 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200934 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200954 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200968 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:24.200975 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.200996 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.278402 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278488 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278499 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278509 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278567 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278576 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278615 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278635 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278648 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.278664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416210 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-04-16 05:42:24.416327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416381 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:42:24.416405 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416447 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416460 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416471 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416481 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416498 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416516 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416526 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.416546 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621182 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621290 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:42:24.621308 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:42:24.621323 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621336 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621347 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621359 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 
05:42:24.621371 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621433 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621447 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:42:24.621459 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.621476 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:24.621556 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:42:35.157705 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.157862 | orchestrator |
2026-04-16 05:42:35.157880 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 05:42:35.157895 | orchestrator | Thursday 16 April 2026 05:42:24 +0000 (0:00:01.157) 0:00:22.971 ********
2026-04-16 05:42:35.157907 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:35.157919 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:35.157929 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:35.157940 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:35.157951 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:35.157962 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:35.157972 | orchestrator |
2026-04-16 05:42:35.157983 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 05:42:35.157994 | orchestrator | Thursday 16 April 2026 05:42:25 +0000 (0:00:00.917) 0:00:23.889 ********
2026-04-16 05:42:35.158005 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:35.158086 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:35.158130 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:35.158143 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:35.158154 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:35.158165 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:35.158176 | orchestrator |
2026-04-16 05:42:35.158188 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 05:42:35.158199 | orchestrator | Thursday 16 April 2026 05:42:26 +0000 (0:00:00.712) 0:00:24.601 ********
2026-04-16 05:42:35.158213 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.158227 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.158240 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.158252 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.158265 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.158277 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.158289 | orchestrator |
2026-04-16 05:42:35.158302 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 05:42:35.158316 | orchestrator | Thursday 16 April 2026 05:42:26 +0000 (0:00:00.527) 0:00:25.128 ********
2026-04-16 05:42:35.158329 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.158341 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.158357 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.158376 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.158395 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.158413 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.158431 | orchestrator |
2026-04-16 05:42:35.158451 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 05:42:35.158474 | orchestrator | Thursday 16 April 2026 05:42:27 +0000 (0:00:00.722) 0:00:25.851 ********
2026-04-16 05:42:35.158494 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.158509 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.158522 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.158569 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.158581 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.158591 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.158602 | orchestrator |
2026-04-16 05:42:35.158613 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 05:42:35.158624 | orchestrator | Thursday 16 April 2026 05:42:28 +0000 (0:00:00.595) 0:00:26.447 ********
2026-04-16 05:42:35.158635 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.158646 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.158656 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.158667 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.158678 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.158688 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.158699 | orchestrator |
2026-04-16 05:42:35.158710 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 05:42:35.158720 | orchestrator | Thursday 16 April 2026 05:42:28 +0000 (0:00:00.734) 0:00:27.182 ********
2026-04-16 05:42:35.158731 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 05:42:35.158743 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 05:42:35.158754 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 05:42:35.158764 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 05:42:35.158775 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 05:42:35.158786 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 05:42:35.158797 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 05:42:35.158807 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 05:42:35.158818 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 05:42:35.158828 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 05:42:35.158839 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 05:42:35.158850 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 05:42:35.158861 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 05:42:35.158872 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 05:42:35.158882 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 05:42:35.158893 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 05:42:35.158903 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 05:42:35.158931 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 05:42:35.158942 | orchestrator |
2026-04-16 05:42:35.158954 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 05:42:35.158964 | orchestrator | Thursday 16 April 2026 05:42:30 +0000 (0:00:01.548) 0:00:28.730 ********
2026-04-16 05:42:35.158975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 05:42:35.158987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 05:42:35.158997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 05:42:35.159008 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159019 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 05:42:35.159030 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 05:42:35.159041 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 05:42:35.159072 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.159084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 05:42:35.159094 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 05:42:35.159136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 05:42:35.159150 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.159161 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 05:42:35.159171 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 05:42:35.159192 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 05:42:35.159209 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.159235 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 05:42:35.159256 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 05:42:35.159273 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 05:42:35.159289 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.159306 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 05:42:35.159322 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 05:42:35.159338 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 05:42:35.159355 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.159372 | orchestrator |
2026-04-16 05:42:35.159390 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 05:42:35.159408 | orchestrator | Thursday 16 April 2026 05:42:31 +0000 (0:00:00.812) 0:00:29.542 ********
2026-04-16 05:42:35.159426 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:35.159444 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:35.159463 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:35.159483 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:42:35.159502 | orchestrator |
2026-04-16 05:42:35.159514 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 05:42:35.159526 | orchestrator | Thursday 16 April 2026 05:42:32 +0000 (0:00:00.929) 0:00:30.472 ********
2026-04-16 05:42:35.159537 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159548 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.159559 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.159569 | orchestrator |
2026-04-16 05:42:35.159580 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 05:42:35.159591 | orchestrator | Thursday 16 April 2026 05:42:32 +0000 (0:00:00.322) 0:00:30.794 ********
2026-04-16 05:42:35.159601 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159612 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.159623 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.159633 | orchestrator |
2026-04-16 05:42:35.159644 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 05:42:35.159655 | orchestrator | Thursday 16 April 2026 05:42:32 +0000 (0:00:00.303) 0:00:31.098 ********
2026-04-16 05:42:35.159665 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159676 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:35.159687 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:35.159697 | orchestrator |
2026-04-16 05:42:35.159708 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 05:42:35.159719 | orchestrator | Thursday 16 April 2026 05:42:33 +0000 (0:00:00.327) 0:00:31.425 ********
2026-04-16 05:42:35.159729 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:35.159740 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:35.159751 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:35.159761 | orchestrator |
2026-04-16 05:42:35.159772 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 05:42:35.159783 | orchestrator | Thursday 16 April 2026 05:42:33 +0000 (0:00:00.629) 0:00:32.055 ********
2026-04-16 05:42:35.159793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:42:35.159804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:42:35.159814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:42:35.159825 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159836 | orchestrator |
2026-04-16 05:42:35.159846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 05:42:35.159857 | orchestrator | Thursday 16 April 2026 05:42:34 +0000 (0:00:00.372) 0:00:32.427 ********
2026-04-16 05:42:35.159879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:42:35.159890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:42:35.159900 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:42:35.159911 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.159921 | orchestrator |
2026-04-16 05:42:35.159932 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 05:42:35.159942 | orchestrator | Thursday 16 April 2026 05:42:34 +0000 (0:00:00.382) 0:00:32.810 ********
2026-04-16 05:42:35.159961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:42:35.159972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:42:35.159983 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:42:35.159994 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:35.160004 | orchestrator |
2026-04-16 05:42:35.160015 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 05:42:35.160026 | orchestrator | Thursday 16 April 2026 05:42:34 +0000 (0:00:00.368) 0:00:33.178 ********
2026-04-16 05:42:35.160037 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:35.160047 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:35.160058 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:35.160068 | orchestrator |
2026-04-16 05:42:35.160079 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 05:42:35.160159 | orchestrator | Thursday 16 April 2026 05:42:35 +0000 (0:00:00.325) 0:00:33.504 ********
2026-04-16 05:42:53.258607 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 05:42:53.258751 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-16 05:42:53.258768 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 05:42:53.258781 | orchestrator |
2026-04-16 05:42:53.258794 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 05:42:53.258807 | orchestrator | Thursday 16 April 2026 05:42:35 +0000 (0:00:00.713) 0:00:34.218 ********
2026-04-16 05:42:53.258819 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:42:53.258831 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:42:53.258842 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:42:53.258854 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:42:53.258866 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 05:42:53.258877 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 05:42:53.258887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 05:42:53.258898 | orchestrator |
2026-04-16 05:42:53.258910 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 05:42:53.258920 | orchestrator | Thursday 16 April 2026 05:42:36 +0000 (0:00:01.095) 0:00:35.313 ********
2026-04-16 05:42:53.258931 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:42:53.258942 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:42:53.258953 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:42:53.258964 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:42:53.258975 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 05:42:53.258986 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 05:42:53.258997 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 05:42:53.259007 | orchestrator |
2026-04-16 05:42:53.259018 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 05:42:53.259055 | orchestrator | Thursday 16 April 2026 05:42:38 +0000 (0:00:01.820) 0:00:37.134 ********
2026-04-16 05:42:53.259067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:42:53.259079 | orchestrator |
2026-04-16 05:42:53.259118 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 05:42:53.259131 | orchestrator | Thursday 16 April 2026 05:42:39 +0000 (0:00:01.149) 0:00:38.283 ********
2026-04-16 05:42:53.259144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:42:53.259157 | orchestrator |
2026-04-16 05:42:53.259169 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 05:42:53.259182 | orchestrator | Thursday 16 April 2026 05:42:41 +0000 (0:00:01.185) 0:00:39.469 ********
2026-04-16 05:42:53.259195 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.259208 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.259220 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.259232 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:53.259244 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:53.259256 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:53.259269 | orchestrator |
2026-04-16 05:42:53.259281 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 05:42:53.259294 | orchestrator | Thursday 16 April 2026 05:42:42 +0000 (0:00:01.209) 0:00:40.678 ********
2026-04-16 05:42:53.259306 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.259318 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.259330 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.259342 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.259355 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.259367 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.259379 | orchestrator |
2026-04-16 05:42:53.259391 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 05:42:53.259404 | orchestrator | Thursday 16 April 2026 05:42:43 +0000 (0:00:00.688) 0:00:41.367 ********
2026-04-16 05:42:53.259416 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.259429 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.259442 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.259454 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.259465 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.259476 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.259486 | orchestrator |
2026-04-16 05:42:53.259513 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 05:42:53.259527 | orchestrator | Thursday 16 April 2026 05:42:43 +0000 (0:00:00.775) 0:00:42.142 ********
2026-04-16 05:42:53.259545 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.259564 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.259582 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.259599 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.259615 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.259632 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.259650 | orchestrator |
2026-04-16 05:42:53.259670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 05:42:53.259691 | orchestrator | Thursday 16 April 2026 05:42:44 +0000 (0:00:00.681) 0:00:42.824 ********
2026-04-16 05:42:53.259712 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.259731 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.259769 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.259780 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:53.259791 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:53.259801 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:53.259812 | orchestrator |
2026-04-16 05:42:53.259823 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 05:42:53.259844 | orchestrator | Thursday 16 April 2026 05:42:45 +0000 (0:00:01.132) 0:00:43.957 ********
2026-04-16 05:42:53.259855 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.259866 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.259877 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.259887 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.259898 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.259908 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.259919 | orchestrator |
2026-04-16 05:42:53.259930 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 05:42:53.259941 | orchestrator | Thursday 16 April 2026 05:42:46 +0000 (0:00:00.596) 0:00:44.553 ********
2026-04-16 05:42:53.259951 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.259962 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.259973 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.259983 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.259994 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260004 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260015 | orchestrator |
2026-04-16 05:42:53.260026 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 05:42:53.260036 | orchestrator | Thursday 16 April 2026 05:42:46 +0000 (0:00:00.717) 0:00:45.271 ********
2026-04-16 05:42:53.260047 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.260058 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.260068 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.260079 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:53.260117 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:53.260128 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:53.260139 | orchestrator |
2026-04-16 05:42:53.260150 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 05:42:53.260161 | orchestrator | Thursday 16 April 2026 05:42:47 +0000 (0:00:00.969) 0:00:46.240 ********
2026-04-16 05:42:53.260172 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.260182 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.260193 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.260203 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:53.260214 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:53.260224 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:53.260235 | orchestrator |
2026-04-16 05:42:53.260246 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 05:42:53.260256 | orchestrator | Thursday 16 April 2026 05:42:49 +0000 (0:00:01.239) 0:00:47.480 ********
2026-04-16 05:42:53.260267 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.260330 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.260343 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.260354 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.260365 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260376 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260386 | orchestrator |
2026-04-16 05:42:53.260398 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 05:42:53.260409 | orchestrator | Thursday 16 April 2026 05:42:49 +0000 (0:00:00.554) 0:00:48.035 ********
2026-04-16 05:42:53.260420 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.260430 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.260441 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.260451 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:42:53.260462 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:42:53.260473 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:42:53.260484 | orchestrator |
2026-04-16 05:42:53.260495 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 05:42:53.260505 | orchestrator | Thursday 16 April 2026 05:42:50 +0000 (0:00:00.748) 0:00:48.783 ********
2026-04-16 05:42:53.260516 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.260527 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.260545 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.260556 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.260567 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260578 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260588 | orchestrator |
2026-04-16 05:42:53.260599 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 05:42:53.260610 | orchestrator | Thursday 16 April 2026 05:42:50 +0000 (0:00:00.568) 0:00:49.351 ********
2026-04-16 05:42:53.260621 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.260632 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.260642 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.260653 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.260664 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260674 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260685 | orchestrator |
2026-04-16 05:42:53.260696 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 05:42:53.260707 | orchestrator | Thursday 16 April 2026 05:42:51 +0000 (0:00:00.740) 0:00:50.091 ********
2026-04-16 05:42:53.260718 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:42:53.260728 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:42:53.260739 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:42:53.260750 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.260761 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260779 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260790 | orchestrator |
2026-04-16 05:42:53.260801 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 05:42:53.260812 | orchestrator | Thursday 16 April 2026 05:42:52 +0000 (0:00:00.552) 0:00:50.643 ********
2026-04-16 05:42:53.260823 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.260833 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:42:53.260844 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:42:53.260855 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:42:53.260865 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:42:53.260876 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:42:53.260886 | orchestrator |
2026-04-16 05:42:53.260897 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 05:42:53.260908 | orchestrator | Thursday 16 April 2026 05:42:52 +0000 (0:00:00.712) 0:00:51.356 ********
2026-04-16 05:42:53.260919 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:42:53.260937 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.128836 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.129006 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:44:14.129055 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:44:14.129079 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:44:14.129099 | orchestrator |
2026-04-16 05:44:14.129120 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 05:44:14.129142 | orchestrator | Thursday 16 April 2026 05:42:53 +0000 (0:00:00.534) 0:00:51.891 ********
2026-04-16 05:44:14.129162 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:44:14.129183 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.129231 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.129252 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:44:14.129271 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:44:14.129290 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:44:14.129311 | orchestrator |
2026-04-16 05:44:14.129331 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 05:44:14.129354 | orchestrator | Thursday 16 April 2026 05:42:54 +0000 (0:00:00.774) 0:00:52.665 ********
2026-04-16 05:44:14.129377 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:44:14.129399 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:44:14.129440 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:44:14.129461 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:44:14.129480 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:44:14.129498 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:44:14.129546 | orchestrator |
2026-04-16 05:44:14.129566 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 05:44:14.129583 | orchestrator | Thursday 16 April 2026 05:42:54 +0000 (0:00:00.603) 0:00:53.269 ********
2026-04-16 05:44:14.129600 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:44:14.129616 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:44:14.129632 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:44:14.129650 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:44:14.129669 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:44:14.129688 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:44:14.129707 | orchestrator |
2026-04-16 05:44:14.129726 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 05:44:14.129745 | orchestrator | Thursday 16 April 2026 05:42:56 +0000 (0:00:01.199) 0:00:54.469 ********
2026-04-16 05:44:14.129764 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:44:14.129783 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:44:14.129802 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:44:14.129821 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:44:14.129840 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:44:14.129859 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:44:14.129879 | orchestrator |
2026-04-16 05:44:14.129898 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 05:44:14.129917 | orchestrator | Thursday 16 April 2026 05:42:57 +0000 (0:00:01.680) 0:00:56.149 ********
2026-04-16 05:44:14.129936 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:44:14.129954 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:44:14.129973 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:44:14.129992 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:44:14.130010 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:44:14.130126 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:44:14.130146 | orchestrator |
2026-04-16 05:44:14.130164 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 05:44:14.130182 | orchestrator | Thursday 16 April 2026 05:42:59 +0000 (0:00:02.055) 0:00:58.205 ********
2026-04-16 05:44:14.130200 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:44:14.130221 | orchestrator |
2026-04-16 05:44:14.130238 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 05:44:14.130255 | orchestrator | Thursday 16 April 2026 05:43:01 +0000 (0:00:01.322) 0:00:59.527 ********
2026-04-16 05:44:14.130271 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:44:14.130288 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.130304 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.130320 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:44:14.130337 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:44:14.130354 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:44:14.130371 | orchestrator |
2026-04-16 05:44:14.130387 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 05:44:14.130404 | orchestrator | Thursday 16 April 2026 05:43:01 +0000 (0:00:00.731) 0:01:00.093 ********
2026-04-16 05:44:14.130421 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:44:14.130437 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.130454 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.130470 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:44:14.130486 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:44:14.130501 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:44:14.130517 | orchestrator |
2026-04-16 05:44:14.130533 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 05:44:14.130548 | orchestrator | Thursday 16 April 2026 05:43:02 +0000 (0:00:00.731) 0:01:00.825 ********
2026-04-16 05:44:14.130564 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130597 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130626 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130642 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130658 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130674 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130690 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 05:44:14.130706 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130722 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130760 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130777 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130792 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 05:44:14.130808 | orchestrator |
2026-04-16 05:44:14.130824 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 05:44:14.130840 | orchestrator | Thursday 16 April 2026 05:43:03 +0000 (0:00:01.273) 0:01:02.099 ********
2026-04-16 05:44:14.130855 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:44:14.130870 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:44:14.130886 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:44:14.130902 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:44:14.130918 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:44:14.130933 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:44:14.130949 | orchestrator |
2026-04-16 05:44:14.130965 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 05:44:14.130982 | orchestrator | Thursday 16 April 2026 05:43:04 +0000 (0:00:01.057) 0:01:03.156 ********
2026-04-16 05:44:14.130999 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:44:14.131016 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.131091 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.131110 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:44:14.131127 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:44:14.131143 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:44:14.131160 | orchestrator |
2026-04-16 05:44:14.131175 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 05:44:14.131190 | orchestrator | Thursday 16 April 2026 05:43:05 +0000 (0:00:00.560) 0:01:03.717 ********
2026-04-16 05:44:14.131206 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:44:14.131222 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:44:14.131238 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:44:14.131253 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:44:14.131269 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:44:14.131343 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:44:14.131362 | orchestrator |
2026-04-16 05:44:14.131379 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 05:44:14.131396 |
orchestrator | Thursday 16 April 2026 05:43:06 +0000 (0:00:00.709) 0:01:04.426 ******** 2026-04-16 05:44:14.131412 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:14.131429 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:14.131446 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:14.131463 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:14.131480 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:14.131496 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:14.131513 | orchestrator | 2026-04-16 05:44:14.131529 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 05:44:14.131546 | orchestrator | Thursday 16 April 2026 05:43:06 +0000 (0:00:00.548) 0:01:04.975 ******** 2026-04-16 05:44:14.131575 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:44:14.131592 | orchestrator | 2026-04-16 05:44:14.131609 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 05:44:14.131626 | orchestrator | Thursday 16 April 2026 05:43:07 +0000 (0:00:01.147) 0:01:06.122 ******** 2026-04-16 05:44:14.131643 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:14.131660 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:44:14.131677 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:44:14.131693 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:14.131710 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:14.131726 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:44:14.131743 | orchestrator | 2026-04-16 05:44:14.131759 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 05:44:14.131776 | orchestrator | Thursday 16 April 2026 05:44:13 +0000 (0:01:05.748) 0:02:11.870 ******** 2026-04-16 
05:44:14.131793 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:14.131810 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:14.131827 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:14.131843 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:14.131860 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:14.131877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:14.131894 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:14.131910 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:14.131927 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:14.131943 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:14.131969 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:14.131986 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:14.132003 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:14.132020 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:14.132093 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:14.132111 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:14.132129 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:14.132146 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:14.132164 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:14.132194 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.039392 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 05:44:36.039492 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 05:44:36.039509 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 05:44:36.039522 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.039535 | orchestrator | 2026-04-16 05:44:36.039546 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 05:44:36.039553 | orchestrator | Thursday 16 April 2026 05:44:14 +0000 (0:00:00.607) 0:02:12.477 ******** 2026-04-16 05:44:36.039559 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039565 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.039572 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.039579 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.039585 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.039609 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.039616 | orchestrator | 2026-04-16 05:44:36.039622 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 05:44:36.039629 | orchestrator | Thursday 16 April 2026 05:44:14 +0000 (0:00:00.713) 0:02:13.191 ******** 2026-04-16 05:44:36.039635 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039641 | orchestrator | 2026-04-16 05:44:36.039647 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 05:44:36.039653 | orchestrator | Thursday 16 April 2026 05:44:14 +0000 (0:00:00.148) 0:02:13.339 ******** 2026-04-16 05:44:36.039660 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039666 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 05:44:36.039672 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.039678 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.039684 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.039690 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.039696 | orchestrator | 2026-04-16 05:44:36.039702 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 05:44:36.039709 | orchestrator | Thursday 16 April 2026 05:44:15 +0000 (0:00:00.571) 0:02:13.910 ******** 2026-04-16 05:44:36.039715 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039721 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.039727 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.039733 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.039739 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.039745 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.039751 | orchestrator | 2026-04-16 05:44:36.039757 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 05:44:36.039763 | orchestrator | Thursday 16 April 2026 05:44:16 +0000 (0:00:00.732) 0:02:14.643 ******** 2026-04-16 05:44:36.039770 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039776 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.039782 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.039788 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.039795 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.039801 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.039807 | orchestrator | 2026-04-16 05:44:36.039813 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 05:44:36.039819 | orchestrator | Thursday 16 April 2026 05:44:16 +0000 (0:00:00.567) 
0:02:15.210 ******** 2026-04-16 05:44:36.039825 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:36.039833 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:36.039839 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:44:36.039845 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:36.039851 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:44:36.039857 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:44:36.039863 | orchestrator | 2026-04-16 05:44:36.039869 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 05:44:36.039876 | orchestrator | Thursday 16 April 2026 05:44:19 +0000 (0:00:03.130) 0:02:18.341 ******** 2026-04-16 05:44:36.039882 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:36.039888 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:36.039894 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:36.039900 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:44:36.039906 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:44:36.039912 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:44:36.039918 | orchestrator | 2026-04-16 05:44:36.039924 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 05:44:36.039931 | orchestrator | Thursday 16 April 2026 05:44:20 +0000 (0:00:00.562) 0:02:18.904 ******** 2026-04-16 05:44:36.039938 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:44:36.039946 | orchestrator | 2026-04-16 05:44:36.039953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 05:44:36.039964 | orchestrator | Thursday 16 April 2026 05:44:21 +0000 (0:00:01.163) 0:02:20.068 ******** 2026-04-16 05:44:36.039970 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.039976 | orchestrator | skipping: 
[testbed-node-4] 2026-04-16 05:44:36.039982 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.039989 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040007 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040050 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040058 | orchestrator | 2026-04-16 05:44:36.040064 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 05:44:36.040070 | orchestrator | Thursday 16 April 2026 05:44:22 +0000 (0:00:00.765) 0:02:20.833 ******** 2026-04-16 05:44:36.040076 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040082 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040089 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040095 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040101 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040107 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040113 | orchestrator | 2026-04-16 05:44:36.040119 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 05:44:36.040125 | orchestrator | Thursday 16 April 2026 05:44:23 +0000 (0:00:00.593) 0:02:21.426 ******** 2026-04-16 05:44:36.040131 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040150 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040157 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040163 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040169 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040175 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040181 | orchestrator | 2026-04-16 05:44:36.040188 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 05:44:36.040194 | orchestrator | Thursday 16 April 2026 05:44:23 +0000 (0:00:00.764) 
0:02:22.191 ******** 2026-04-16 05:44:36.040200 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040206 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040213 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040219 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040225 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040231 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040237 | orchestrator | 2026-04-16 05:44:36.040243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 05:44:36.040249 | orchestrator | Thursday 16 April 2026 05:44:24 +0000 (0:00:00.591) 0:02:22.782 ******** 2026-04-16 05:44:36.040255 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040261 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040267 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040273 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040280 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040286 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040292 | orchestrator | 2026-04-16 05:44:36.040298 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 05:44:36.040304 | orchestrator | Thursday 16 April 2026 05:44:25 +0000 (0:00:00.770) 0:02:23.552 ******** 2026-04-16 05:44:36.040310 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040316 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040322 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040328 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040335 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040341 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040347 | orchestrator | 2026-04-16 05:44:36.040353 | orchestrator | TASK [ceph-container-common : Set_fact 
ceph_release pacific] ******************* 2026-04-16 05:44:36.040359 | orchestrator | Thursday 16 April 2026 05:44:25 +0000 (0:00:00.589) 0:02:24.142 ******** 2026-04-16 05:44:36.040370 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040376 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040382 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040388 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040395 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040401 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040407 | orchestrator | 2026-04-16 05:44:36.040413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 05:44:36.040419 | orchestrator | Thursday 16 April 2026 05:44:26 +0000 (0:00:00.784) 0:02:24.927 ******** 2026-04-16 05:44:36.040426 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:36.040432 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:36.040438 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:36.040444 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:36.040450 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:36.040456 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:36.040462 | orchestrator | 2026-04-16 05:44:36.040468 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 05:44:36.040474 | orchestrator | Thursday 16 April 2026 05:44:27 +0000 (0:00:00.562) 0:02:25.489 ******** 2026-04-16 05:44:36.040480 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:36.040486 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:36.040493 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:36.040499 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:44:36.040505 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:44:36.040511 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:44:36.040517 
| orchestrator | 2026-04-16 05:44:36.040523 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 05:44:36.040530 | orchestrator | Thursday 16 April 2026 05:44:28 +0000 (0:00:01.173) 0:02:26.663 ******** 2026-04-16 05:44:36.040537 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:44:36.040545 | orchestrator | 2026-04-16 05:44:36.040551 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 05:44:36.040557 | orchestrator | Thursday 16 April 2026 05:44:29 +0000 (0:00:01.187) 0:02:27.850 ******** 2026-04-16 05:44:36.040563 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-04-16 05:44:36.040569 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040576 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-04-16 05:44:36.040582 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-04-16 05:44:36.040588 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-04-16 05:44:36.040594 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:36.040600 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040610 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-04-16 05:44:36.040616 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040622 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-04-16 05:44:36.040628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040634 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:36.040640 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040647 | 
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:36.040653 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:36.040659 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-04-16 05:44:36.040666 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:36.040676 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175188 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:41.175345 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:41.175375 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:41.175389 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-04-16 05:44:41.175400 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:41.175423 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:41.175434 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175445 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175456 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-04-16 05:44:41.175467 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175479 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175490 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175501 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175511 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175522 | orchestrator | changed: [testbed-node-2] 
=> (item=/var/lib/ceph/mds) 2026-04-16 05:44:41.175533 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175544 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175555 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175566 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175577 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175588 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-04-16 05:44:41.175599 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175621 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175633 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175659 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-16 05:44:41.175670 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175684 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175696 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175720 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175732 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-16 05:44:41.175744 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 
05:44:41.175757 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175769 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175782 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175794 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175806 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 05:44:41.175818 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.175831 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175843 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 05:44:41.175864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175876 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 05:44:41.175888 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 05:44:41.175901 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.175913 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.175925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 05:44:41.175954 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 05:44:41.175967 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.175980 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.175992 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 05:44:41.176004 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.176062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.176074 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.176085 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.176096 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 05:44:41.176126 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-04-16 05:44:41.176139 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.176149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.176160 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.176171 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 05:44:41.176181 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-04-16 05:44:41.176192 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.176203 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-04-16 05:44:41.176214 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.176224 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.176235 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-04-16 05:44:41.176246 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 05:44:41.176256 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-04-16 05:44:41.176267 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-04-16 05:44:41.176278 | orchestrator | 
changed: [testbed-node-2] => (item=/var/run/ceph) 2026-04-16 05:44:41.176288 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-04-16 05:44:41.176299 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-04-16 05:44:41.176310 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-04-16 05:44:41.176320 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-04-16 05:44:41.176331 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-04-16 05:44:41.176342 | orchestrator | 2026-04-16 05:44:41.176353 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 05:44:41.176364 | orchestrator | Thursday 16 April 2026 05:44:36 +0000 (0:00:06.528) 0:02:34.379 ******** 2026-04-16 05:44:41.176375 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:41.176386 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:41.176396 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:41.176408 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:44:41.176429 | orchestrator | 2026-04-16 05:44:41.176441 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 05:44:41.176451 | orchestrator | Thursday 16 April 2026 05:44:36 +0000 (0:00:00.947) 0:02:35.326 ******** 2026-04-16 05:44:41.176462 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:41.176473 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:41.176484 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-04-16 05:44:41.176495 | orchestrator | 2026-04-16 05:44:41.176506 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 05:44:41.176517 | orchestrator | Thursday 16 April 2026 05:44:37 +0000 (0:00:00.686) 0:02:36.013 ******** 2026-04-16 05:44:41.176528 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:41.176538 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:41.176549 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:41.176560 | orchestrator | 2026-04-16 05:44:41.176570 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 05:44:41.176581 | orchestrator | Thursday 16 April 2026 05:44:38 +0000 (0:00:01.147) 0:02:37.160 ******** 2026-04-16 05:44:41.176592 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:41.176603 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:41.176614 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:41.176625 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:41.176635 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:41.176645 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:41.176656 | orchestrator | 2026-04-16 05:44:41.176667 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 05:44:41.176684 | orchestrator | Thursday 16 April 2026 05:44:39 +0000 (0:00:00.895) 0:02:38.056 ******** 2026-04-16 05:44:41.176695 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:41.176706 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:41.176717 | orchestrator | ok: [testbed-node-5] 2026-04-16 
05:44:41.176727 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:41.176738 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:41.176749 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:41.176759 | orchestrator | 2026-04-16 05:44:41.176770 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 05:44:41.176781 | orchestrator | Thursday 16 April 2026 05:44:40 +0000 (0:00:00.608) 0:02:38.664 ******** 2026-04-16 05:44:41.176792 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:41.176802 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:41.176813 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:41.176824 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:41.176836 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:41.176855 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:41.176873 | orchestrator | 2026-04-16 05:44:41.176901 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 05:44:53.581296 | orchestrator | Thursday 16 April 2026 05:44:41 +0000 (0:00:00.860) 0:02:39.525 ******** 2026-04-16 05:44:53.581429 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.581455 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.581473 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.581491 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.581508 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.581526 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.581571 | orchestrator | 2026-04-16 05:44:53.581591 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 05:44:53.581607 | orchestrator | Thursday 16 April 2026 05:44:41 +0000 (0:00:00.569) 0:02:40.095 ******** 2026-04-16 05:44:53.581623 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
05:44:53.581639 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.581657 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.581675 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.581691 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.581706 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.581722 | orchestrator | 2026-04-16 05:44:53.581740 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 05:44:53.581758 | orchestrator | Thursday 16 April 2026 05:44:42 +0000 (0:00:00.753) 0:02:40.848 ******** 2026-04-16 05:44:53.581776 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.581794 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.581812 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.581828 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.581847 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.581865 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.581882 | orchestrator | 2026-04-16 05:44:53.581901 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 05:44:53.581919 | orchestrator | Thursday 16 April 2026 05:44:43 +0000 (0:00:00.570) 0:02:41.419 ******** 2026-04-16 05:44:53.581938 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.581956 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.581974 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.581992 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582108 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582127 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582144 | orchestrator | 2026-04-16 05:44:53.582158 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' 
(new report)] *** 2026-04-16 05:44:53.582175 | orchestrator | Thursday 16 April 2026 05:44:43 +0000 (0:00:00.786) 0:02:42.205 ******** 2026-04-16 05:44:53.582192 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.582209 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.582225 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.582241 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582258 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582275 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582291 | orchestrator | 2026-04-16 05:44:53.582308 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 05:44:53.582325 | orchestrator | Thursday 16 April 2026 05:44:44 +0000 (0:00:00.581) 0:02:42.787 ******** 2026-04-16 05:44:53.582341 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582358 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582374 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582391 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:53.582409 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:53.582426 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:53.582441 | orchestrator | 2026-04-16 05:44:53.582457 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 05:44:53.582474 | orchestrator | Thursday 16 April 2026 05:44:47 +0000 (0:00:02.802) 0:02:45.589 ******** 2026-04-16 05:44:53.582490 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:53.582506 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:53.582523 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:53.582539 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582555 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582571 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582588 | 
orchestrator | 2026-04-16 05:44:53.582604 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 05:44:53.582637 | orchestrator | Thursday 16 April 2026 05:44:47 +0000 (0:00:00.589) 0:02:46.179 ******** 2026-04-16 05:44:53.582653 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:44:53.582669 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:44:53.582685 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:44:53.582701 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582717 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582733 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582749 | orchestrator | 2026-04-16 05:44:53.582765 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 05:44:53.582781 | orchestrator | Thursday 16 April 2026 05:44:48 +0000 (0:00:00.901) 0:02:47.080 ******** 2026-04-16 05:44:53.582797 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.582813 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.582829 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.582864 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.582880 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.582896 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.582912 | orchestrator | 2026-04-16 05:44:53.582928 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 05:44:53.582945 | orchestrator | Thursday 16 April 2026 05:44:49 +0000 (0:00:00.635) 0:02:47.716 ******** 2026-04-16 05:44:53.582962 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:53.582980 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 
2026-04-16 05:44:53.582996 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 05:44:53.583040 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583081 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.583100 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583117 | orchestrator | 2026-04-16 05:44:53.583134 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 05:44:53.583151 | orchestrator | Thursday 16 April 2026 05:44:50 +0000 (0:00:00.857) 0:02:48.573 ******** 2026-04-16 05:44:53.583172 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-04-16 05:44:53.583194 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-04-16 05:44:53.583213 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-04-16 05:44:53.583231 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 
'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-04-16 05:44:53.583248 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.583265 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-04-16 05:44:53.583295 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-04-16 05:44:53.583312 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.583328 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.583345 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583362 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.583379 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583396 | orchestrator | 2026-04-16 05:44:53.583413 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 05:44:53.583430 | orchestrator | Thursday 16 April 2026 05:44:50 +0000 (0:00:00.655) 0:02:49.229 ******** 2026-04-16 05:44:53.583447 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.583462 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.583479 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.583496 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583513 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.583530 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583546 | orchestrator | 2026-04-16 05:44:53.583563 | orchestrator | TASK 
[ceph-config : Create ceph conf directory] ******************************** 2026-04-16 05:44:53.583581 | orchestrator | Thursday 16 April 2026 05:44:51 +0000 (0:00:00.762) 0:02:49.992 ******** 2026-04-16 05:44:53.583598 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.583615 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.583632 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.583648 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583665 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.583683 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583699 | orchestrator | 2026-04-16 05:44:53.583716 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 05:44:53.583743 | orchestrator | Thursday 16 April 2026 05:44:52 +0000 (0:00:00.550) 0:02:50.542 ******** 2026-04-16 05:44:53.583760 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.583770 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.583780 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.583789 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583798 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:44:53.583808 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583817 | orchestrator | 2026-04-16 05:44:53.583827 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 05:44:53.583837 | orchestrator | Thursday 16 April 2026 05:44:52 +0000 (0:00:00.799) 0:02:51.342 ******** 2026-04-16 05:44:53.583847 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:44:53.583856 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:44:53.583866 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:44:53.583875 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:44:53.583884 | orchestrator 
| skipping: [testbed-node-1] 2026-04-16 05:44:53.583894 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:44:53.583903 | orchestrator | 2026-04-16 05:44:53.583913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 05:44:53.583931 | orchestrator | Thursday 16 April 2026 05:44:53 +0000 (0:00:00.585) 0:02:51.928 ******** 2026-04-16 05:45:11.007291 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.007400 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:45:11.007416 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:45:11.007428 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:45:11.007445 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:11.007465 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:11.007509 | orchestrator | 2026-04-16 05:45:11.007531 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 05:45:11.007551 | orchestrator | Thursday 16 April 2026 05:44:54 +0000 (0:00:00.898) 0:02:52.826 ******** 2026-04-16 05:45:11.007572 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:45:11.007586 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:45:11.007597 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:45:11.007610 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:45:11.007629 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:11.007647 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:11.007665 | orchestrator | 2026-04-16 05:45:11.007684 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 05:45:11.007701 | orchestrator | Thursday 16 April 2026 05:44:55 +0000 (0:00:00.898) 0:02:53.725 ******** 2026-04-16 05:45:11.007720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:11.007739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 
05:45:11.007759 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:11.007779 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.007798 | orchestrator | 2026-04-16 05:45:11.007815 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 05:45:11.007826 | orchestrator | Thursday 16 April 2026 05:44:55 +0000 (0:00:00.445) 0:02:54.170 ******** 2026-04-16 05:45:11.007837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:11.007847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:45:11.007867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:11.007885 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.007904 | orchestrator | 2026-04-16 05:45:11.007923 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 05:45:11.007941 | orchestrator | Thursday 16 April 2026 05:44:56 +0000 (0:00:00.431) 0:02:54.601 ******** 2026-04-16 05:45:11.007961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:11.007980 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:45:11.008021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:11.008040 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.008058 | orchestrator | 2026-04-16 05:45:11.008078 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 05:45:11.008097 | orchestrator | Thursday 16 April 2026 05:44:56 +0000 (0:00:00.407) 0:02:55.009 ******** 2026-04-16 05:45:11.008119 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:45:11.008139 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:45:11.008160 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:45:11.008178 | orchestrator | skipping: 
[testbed-node-0] 2026-04-16 05:45:11.008197 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:11.008217 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:11.008236 | orchestrator | 2026-04-16 05:45:11.008254 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 05:45:11.008265 | orchestrator | Thursday 16 April 2026 05:44:57 +0000 (0:00:00.598) 0:02:55.608 ******** 2026-04-16 05:45:11.008276 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 05:45:11.008286 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 05:45:11.008297 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-16 05:45:11.008307 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-16 05:45:11.008318 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:45:11.008329 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-16 05:45:11.008339 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:11.008350 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-04-16 05:45:11.008360 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:11.008371 | orchestrator | 2026-04-16 05:45:11.008381 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 05:45:11.008406 | orchestrator | Thursday 16 April 2026 05:44:58 +0000 (0:00:01.748) 0:02:57.357 ******** 2026-04-16 05:45:11.008417 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:45:11.008427 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:45:11.008437 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:45:11.008448 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:45:11.008458 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:45:11.008469 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:45:11.008479 | orchestrator | 2026-04-16 05:45:11.008490 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] 
********************** 2026-04-16 05:45:11.008500 | orchestrator | Thursday 16 April 2026 05:45:01 +0000 (0:00:02.677) 0:03:00.034 ******** 2026-04-16 05:45:11.008511 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:45:11.008534 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:45:11.008544 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:45:11.008555 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:45:11.008566 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:45:11.008576 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:45:11.008587 | orchestrator | 2026-04-16 05:45:11.008598 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-04-16 05:45:11.008608 | orchestrator | Thursday 16 April 2026 05:45:02 +0000 (0:00:01.062) 0:03:01.096 ******** 2026-04-16 05:45:11.008619 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.008629 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:45:11.008640 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:45:11.008651 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:45:11.008662 | orchestrator | 2026-04-16 05:45:11.008673 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-04-16 05:45:11.008683 | orchestrator | Thursday 16 April 2026 05:45:03 +0000 (0:00:01.122) 0:03:02.219 ******** 2026-04-16 05:45:11.008694 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:45:11.008723 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:45:11.008734 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:45:11.008745 | orchestrator | 2026-04-16 05:45:11.008755 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-04-16 05:45:11.008766 | orchestrator | Thursday 16 April 2026 05:45:04 +0000 (0:00:00.333) 0:03:02.552 ******** 2026-04-16 
05:45:11.008776 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:45:11.008787 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:45:11.008797 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:45:11.008808 | orchestrator | 2026-04-16 05:45:11.008819 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-04-16 05:45:11.008829 | orchestrator | Thursday 16 April 2026 05:45:05 +0000 (0:00:01.419) 0:03:03.972 ******** 2026-04-16 05:45:11.008840 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 05:45:11.008851 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 05:45:11.008861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 05:45:11.008872 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:45:11.008882 | orchestrator | 2026-04-16 05:45:11.008893 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-04-16 05:45:11.008903 | orchestrator | Thursday 16 April 2026 05:45:06 +0000 (0:00:00.608) 0:03:04.580 ******** 2026-04-16 05:45:11.008914 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:45:11.008925 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:45:11.008936 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:45:11.008946 | orchestrator | 2026-04-16 05:45:11.008957 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-04-16 05:45:11.008968 | orchestrator | Thursday 16 April 2026 05:45:06 +0000 (0:00:00.334) 0:03:04.914 ******** 2026-04-16 05:45:11.008978 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:45:11.008989 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:11.009037 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:11.009056 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 
05:45:11.009067 | orchestrator | 2026-04-16 05:45:11.009078 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-04-16 05:45:11.009088 | orchestrator | Thursday 16 April 2026 05:45:07 +0000 (0:00:00.995) 0:03:05.910 ******** 2026-04-16 05:45:11.009099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:11.009110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:45:11.009120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:11.009131 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009141 | orchestrator | 2026-04-16 05:45:11.009152 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-04-16 05:45:11.009163 | orchestrator | Thursday 16 April 2026 05:45:07 +0000 (0:00:00.404) 0:03:06.315 ******** 2026-04-16 05:45:11.009174 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009184 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:45:11.009195 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:45:11.009205 | orchestrator | 2026-04-16 05:45:11.009216 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-04-16 05:45:11.009227 | orchestrator | Thursday 16 April 2026 05:45:08 +0000 (0:00:00.349) 0:03:06.664 ******** 2026-04-16 05:45:11.009237 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009248 | orchestrator | 2026-04-16 05:45:11.009259 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-04-16 05:45:11.009269 | orchestrator | Thursday 16 April 2026 05:45:08 +0000 (0:00:00.218) 0:03:06.883 ******** 2026-04-16 05:45:11.009280 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009290 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:45:11.009301 | orchestrator | skipping: [testbed-node-5] 
2026-04-16 05:45:11.009312 | orchestrator | 2026-04-16 05:45:11.009322 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-04-16 05:45:11.009333 | orchestrator | Thursday 16 April 2026 05:45:08 +0000 (0:00:00.303) 0:03:07.186 ******** 2026-04-16 05:45:11.009344 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009354 | orchestrator | 2026-04-16 05:45:11.009365 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-04-16 05:45:11.009376 | orchestrator | Thursday 16 April 2026 05:45:09 +0000 (0:00:00.636) 0:03:07.823 ******** 2026-04-16 05:45:11.009386 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009397 | orchestrator | 2026-04-16 05:45:11.009408 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-04-16 05:45:11.009419 | orchestrator | Thursday 16 April 2026 05:45:09 +0000 (0:00:00.240) 0:03:08.063 ******** 2026-04-16 05:45:11.009429 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009440 | orchestrator | 2026-04-16 05:45:11.009450 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-04-16 05:45:11.009461 | orchestrator | Thursday 16 April 2026 05:45:09 +0000 (0:00:00.148) 0:03:08.211 ******** 2026-04-16 05:45:11.009472 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009482 | orchestrator | 2026-04-16 05:45:11.009499 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-04-16 05:45:11.009510 | orchestrator | Thursday 16 April 2026 05:45:10 +0000 (0:00:00.258) 0:03:08.470 ******** 2026-04-16 05:45:11.009521 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009532 | orchestrator | 2026-04-16 05:45:11.009542 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-04-16 05:45:11.009553 | orchestrator | 
Thursday 16 April 2026 05:45:10 +0000 (0:00:00.246) 0:03:08.716 ******** 2026-04-16 05:45:11.009564 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:11.009575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:11.009585 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:45:11.009603 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:11.009614 | orchestrator | 2026-04-16 05:45:11.009624 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-04-16 05:45:11.009635 | orchestrator | Thursday 16 April 2026 05:45:10 +0000 (0:00:00.430) 0:03:09.147 ******** 2026-04-16 05:45:11.009652 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:28.204438 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:45:28.204601 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:45:28.204631 | orchestrator | 2026-04-16 05:45:28.204653 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-04-16 05:45:28.204674 | orchestrator | Thursday 16 April 2026 05:45:11 +0000 (0:00:00.339) 0:03:09.486 ******** 2026-04-16 05:45:28.204694 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:28.204713 | orchestrator | 2026-04-16 05:45:28.204732 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-04-16 05:45:28.204750 | orchestrator | Thursday 16 April 2026 05:45:11 +0000 (0:00:00.226) 0:03:09.713 ******** 2026-04-16 05:45:28.204770 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:28.204789 | orchestrator | 2026-04-16 05:45:28.204808 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-16 05:45:28.204827 | orchestrator | Thursday 16 April 2026 05:45:11 +0000 (0:00:00.213) 0:03:09.926 ******** 2026-04-16 05:45:28.204846 | orchestrator | skipping: 
[testbed-node-0] 2026-04-16 05:45:28.204865 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:45:28.204884 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:45:28.204903 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:45:28.204922 | orchestrator | 2026-04-16 05:45:28.204940 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-04-16 05:45:28.204960 | orchestrator | Thursday 16 April 2026 05:45:12 +0000 (0:00:01.021) 0:03:10.947 ******** 2026-04-16 05:45:28.205010 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:45:28.205035 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:45:28.205055 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:45:28.205074 | orchestrator | 2026-04-16 05:45:28.205094 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-16 05:45:28.205113 | orchestrator | Thursday 16 April 2026 05:45:12 +0000 (0:00:00.300) 0:03:11.247 ******** 2026-04-16 05:45:28.205132 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:45:28.205151 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:45:28.205170 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:45:28.205189 | orchestrator | 2026-04-16 05:45:28.205208 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-16 05:45:28.205226 | orchestrator | Thursday 16 April 2026 05:45:14 +0000 (0:00:01.460) 0:03:12.707 ******** 2026-04-16 05:45:28.205244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:45:28.205262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:45:28.205280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:45:28.205299 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:45:28.205317 | orchestrator | 2026-04-16 
05:45:28.205337 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-04-16 05:45:28.205354 | orchestrator | Thursday 16 April 2026 05:45:14 +0000 (0:00:00.651) 0:03:13.359 ********
2026-04-16 05:45:28.205372 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:45:28.205390 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:45:28.205408 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:45:28.205427 | orchestrator |
2026-04-16 05:45:28.205446 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-16 05:45:28.205465 | orchestrator | Thursday 16 April 2026 05:45:15 +0000 (0:00:00.312) 0:03:13.672 ********
2026-04-16 05:45:28.205483 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.205501 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.205518 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.205566 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:45:28.205585 | orchestrator |
2026-04-16 05:45:28.205603 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-16 05:45:28.205621 | orchestrator | Thursday 16 April 2026 05:45:16 +0000 (0:00:00.983) 0:03:14.655 ********
2026-04-16 05:45:28.205639 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:45:28.205657 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:45:28.205674 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:45:28.205692 | orchestrator |
2026-04-16 05:45:28.205711 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-16 05:45:28.205730 | orchestrator | Thursday 16 April 2026 05:45:16 +0000 (0:00:01.157) 0:03:14.988 ********
2026-04-16 05:45:28.205749 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:45:28.205767 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:45:28.205787 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:45:28.205805 | orchestrator |
2026-04-16 05:45:28.205823 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-16 05:45:28.205834 | orchestrator | Thursday 16 April 2026 05:45:17 +0000 (0:00:01.157) 0:03:16.146 ********
2026-04-16 05:45:28.205845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:45:28.205856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:45:28.205882 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:45:28.205893 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:45:28.205904 | orchestrator |
2026-04-16 05:45:28.205914 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-16 05:45:28.205925 | orchestrator | Thursday 16 April 2026 05:45:18 +0000 (0:00:00.800) 0:03:16.946 ********
2026-04-16 05:45:28.205936 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:45:28.205946 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:45:28.205957 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:45:28.205968 | orchestrator |
2026-04-16 05:45:28.205979 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-04-16 05:45:28.206077 | orchestrator | Thursday 16 April 2026 05:45:19 +0000 (0:00:00.511) 0:03:17.458 ********
2026-04-16 05:45:28.206093 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:45:28.206103 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:45:28.206114 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:45:28.206125 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.206136 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.206146 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.206168 | orchestrator |
2026-04-16 05:45:28.206199 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-04-16 05:45:28.206211 | orchestrator | Thursday 16 April 2026 05:45:19 +0000 (0:00:00.633) 0:03:18.091 ********
2026-04-16 05:45:28.206222 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:45:28.206233 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:45:28.206244 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:45:28.206255 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:45:28.206266 | orchestrator |
2026-04-16 05:45:28.206277 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-04-16 05:45:28.206288 | orchestrator | Thursday 16 April 2026 05:45:20 +0000 (0:00:00.980) 0:03:19.071 ********
2026-04-16 05:45:28.206298 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:28.206309 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:28.206320 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:28.206331 | orchestrator |
2026-04-16 05:45:28.206344 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-04-16 05:45:28.206363 | orchestrator | Thursday 16 April 2026 05:45:21 +0000 (0:00:00.306) 0:03:19.378 ********
2026-04-16 05:45:28.206381 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:28.206413 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:28.206430 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:28.206449 | orchestrator |
2026-04-16 05:45:28.206467 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-04-16 05:45:28.206487 | orchestrator | Thursday 16 April 2026 05:45:22 +0000 (0:00:01.160) 0:03:20.538 ********
2026-04-16 05:45:28.206505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 05:45:28.206520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 05:45:28.206531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 05:45:28.206542 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.206553 | orchestrator |
2026-04-16 05:45:28.206563 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-04-16 05:45:28.206574 | orchestrator | Thursday 16 April 2026 05:45:23 +0000 (0:00:00.824) 0:03:21.362 ********
2026-04-16 05:45:28.206585 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:28.206596 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:28.206606 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:28.206617 | orchestrator |
2026-04-16 05:45:28.206627 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-04-16 05:45:28.206638 | orchestrator |
2026-04-16 05:45:28.206649 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 05:45:28.206660 | orchestrator | Thursday 16 April 2026 05:45:23 +0000 (0:00:00.744) 0:03:22.107 ********
2026-04-16 05:45:28.206672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:45:28.206684 | orchestrator |
2026-04-16 05:45:28.206695 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 05:45:28.206705 | orchestrator | Thursday 16 April 2026 05:45:24 +0000 (0:00:00.648) 0:03:22.755 ********
2026-04-16 05:45:28.206716 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:45:28.206727 | orchestrator |
2026-04-16 05:45:28.206738 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 05:45:28.206748 | orchestrator | Thursday 16 April 2026 05:45:24 +0000 (0:00:00.498) 0:03:23.254 ********
2026-04-16 05:45:28.206759 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:28.206770 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:28.206780 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:28.206791 | orchestrator |
2026-04-16 05:45:28.206802 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 05:45:28.206813 | orchestrator | Thursday 16 April 2026 05:45:25 +0000 (0:00:00.715) 0:03:23.970 ********
2026-04-16 05:45:28.206824 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.206834 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.206845 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.206856 | orchestrator |
2026-04-16 05:45:28.206866 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 05:45:28.206877 | orchestrator | Thursday 16 April 2026 05:45:26 +0000 (0:00:00.490) 0:03:24.460 ********
2026-04-16 05:45:28.206888 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.206899 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.206909 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.206920 | orchestrator |
2026-04-16 05:45:28.206931 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 05:45:28.206942 | orchestrator | Thursday 16 April 2026 05:45:26 +0000 (0:00:00.285) 0:03:24.746 ********
2026-04-16 05:45:28.206952 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.206963 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.206973 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.207007 | orchestrator |
2026-04-16 05:45:28.207026 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 05:45:28.207037 | orchestrator | Thursday 16 April 2026 05:45:26 +0000 (0:00:00.304) 0:03:25.051 ********
2026-04-16 05:45:28.207056 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:28.207067 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:28.207078 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:28.207088 | orchestrator |
2026-04-16 05:45:28.207099 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 05:45:28.207110 | orchestrator | Thursday 16 April 2026 05:45:27 +0000 (0:00:00.689) 0:03:25.741 ********
2026-04-16 05:45:28.207121 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.207132 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.207142 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:28.207153 | orchestrator |
2026-04-16 05:45:28.207163 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 05:45:28.207174 | orchestrator | Thursday 16 April 2026 05:45:27 +0000 (0:00:00.493) 0:03:26.235 ********
2026-04-16 05:45:28.207185 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:28.207195 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:28.207214 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669013 | orchestrator |
2026-04-16 05:45:49.669128 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 05:45:49.669146 | orchestrator | Thursday 16 April 2026 05:45:28 +0000 (0:00:00.316) 0:03:26.552 ********
2026-04-16 05:45:49.669157 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.669169 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.669180 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.669191 | orchestrator |
2026-04-16 05:45:49.669202 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 05:45:49.669214 | orchestrator | Thursday 16 April 2026 05:45:28 +0000 (0:00:00.715) 0:03:27.268 ********
2026-04-16 05:45:49.669225 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.669236 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.669246 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.669257 | orchestrator |
2026-04-16 05:45:49.669269 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 05:45:49.669280 | orchestrator | Thursday 16 April 2026 05:45:29 +0000 (0:00:00.688) 0:03:27.956 ********
2026-04-16 05:45:49.669291 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669302 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669313 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669324 | orchestrator |
2026-04-16 05:45:49.669335 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 05:45:49.669346 | orchestrator | Thursday 16 April 2026 05:45:30 +0000 (0:00:00.478) 0:03:28.435 ********
2026-04-16 05:45:49.669357 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.669368 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.669379 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.669390 | orchestrator |
2026-04-16 05:45:49.669401 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 05:45:49.669412 | orchestrator | Thursday 16 April 2026 05:45:30 +0000 (0:00:00.323) 0:03:28.758 ********
2026-04-16 05:45:49.669423 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669434 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669445 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669455 | orchestrator |
2026-04-16 05:45:49.669466 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 05:45:49.669477 | orchestrator | Thursday 16 April 2026 05:45:30 +0000 (0:00:00.293) 0:03:29.052 ********
2026-04-16 05:45:49.669488 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669498 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669509 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669520 | orchestrator |
2026-04-16 05:45:49.669533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 05:45:49.669547 | orchestrator | Thursday 16 April 2026 05:45:30 +0000 (0:00:00.272) 0:03:29.325 ********
2026-04-16 05:45:49.669559 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669594 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669607 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669619 | orchestrator |
2026-04-16 05:45:49.669632 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 05:45:49.669644 | orchestrator | Thursday 16 April 2026 05:45:31 +0000 (0:00:00.503) 0:03:29.828 ********
2026-04-16 05:45:49.669656 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669668 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669681 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669693 | orchestrator |
2026-04-16 05:45:49.669705 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 05:45:49.669718 | orchestrator | Thursday 16 April 2026 05:45:31 +0000 (0:00:00.284) 0:03:30.113 ********
2026-04-16 05:45:49.669730 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.669742 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:45:49.669754 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:45:49.669766 | orchestrator |
2026-04-16 05:45:49.669778 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 05:45:49.669795 | orchestrator | Thursday 16 April 2026 05:45:32 +0000 (0:00:00.287) 0:03:30.400 ********
2026-04-16 05:45:49.669813 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.669832 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.669849 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.669867 | orchestrator |
2026-04-16 05:45:49.669885 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 05:45:49.669902 | orchestrator | Thursday 16 April 2026 05:45:32 +0000 (0:00:00.315) 0:03:30.715 ********
2026-04-16 05:45:49.669921 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.669939 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.669958 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.669999 | orchestrator |
2026-04-16 05:45:49.670076 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 05:45:49.670092 | orchestrator | Thursday 16 April 2026 05:45:32 +0000 (0:00:00.551) 0:03:31.267 ********
2026-04-16 05:45:49.670103 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670114 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.670125 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.670135 | orchestrator |
2026-04-16 05:45:49.670161 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-16 05:45:49.670173 | orchestrator | Thursday 16 April 2026 05:45:33 +0000 (0:00:00.529) 0:03:31.797 ********
2026-04-16 05:45:49.670184 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670194 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.670205 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.670215 | orchestrator |
2026-04-16 05:45:49.670226 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-16 05:45:49.670237 | orchestrator | Thursday 16 April 2026 05:45:33 +0000 (0:00:00.310) 0:03:32.107 ********
2026-04-16 05:45:49.670248 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:45:49.670259 | orchestrator |
2026-04-16 05:45:49.670270 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-16 05:45:49.670280 | orchestrator | Thursday 16 April 2026 05:45:34 +0000 (0:00:00.796) 0:03:32.904 ********
2026-04-16 05:45:49.670291 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:45:49.670302 | orchestrator |
2026-04-16 05:45:49.670312 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-16 05:45:49.670343 | orchestrator | Thursday 16 April 2026 05:45:34 +0000 (0:00:00.159) 0:03:33.063 ********
2026-04-16 05:45:49.670355 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-04-16 05:45:49.670365 | orchestrator |
2026-04-16 05:45:49.670376 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-16 05:45:49.670387 | orchestrator | Thursday 16 April 2026 05:45:35 +0000 (0:00:00.964) 0:03:34.027 ********
2026-04-16 05:45:49.670398 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670419 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.670430 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.670441 | orchestrator |
2026-04-16 05:45:49.670452 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-16 05:45:49.670462 | orchestrator | Thursday 16 April 2026 05:45:35 +0000 (0:00:00.328) 0:03:34.356 ********
2026-04-16 05:45:49.670473 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670484 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.670494 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.670505 | orchestrator |
2026-04-16 05:45:49.670515 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-16 05:45:49.670526 | orchestrator | Thursday 16 April 2026 05:45:36 +0000 (0:00:00.520) 0:03:34.876 ********
2026-04-16 05:45:49.670537 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.670548 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:49.670558 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:49.670569 | orchestrator |
2026-04-16 05:45:49.670580 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-16 05:45:49.670591 | orchestrator | Thursday 16 April 2026 05:45:38 +0000 (0:00:02.156) 0:03:37.033 ********
2026-04-16 05:45:49.670602 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.670612 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:49.670623 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:49.670633 | orchestrator |
2026-04-16 05:45:49.670644 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-16 05:45:49.670655 | orchestrator | Thursday 16 April 2026 05:45:39 +0000 (0:00:00.738) 0:03:37.772 ********
2026-04-16 05:45:49.670666 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.670676 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:49.670687 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:49.670698 | orchestrator |
2026-04-16 05:45:49.670708 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-16 05:45:49.670719 | orchestrator | Thursday 16 April 2026 05:45:40 +0000 (0:00:00.657) 0:03:38.429 ********
2026-04-16 05:45:49.670730 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670741 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.670751 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.670762 | orchestrator |
2026-04-16 05:45:49.670772 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-16 05:45:49.670783 | orchestrator | Thursday 16 April 2026 05:45:41 +0000 (0:00:00.990) 0:03:39.419 ********
2026-04-16 05:45:49.670794 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.670805 | orchestrator |
2026-04-16 05:45:49.670815 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-16 05:45:49.670826 | orchestrator | Thursday 16 April 2026 05:45:42 +0000 (0:00:01.300) 0:03:40.720 ********
2026-04-16 05:45:49.670837 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.670848 | orchestrator |
2026-04-16 05:45:49.670858 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-16 05:45:49.670869 | orchestrator | Thursday 16 April 2026 05:45:43 +0000 (0:00:00.696) 0:03:41.417 ********
2026-04-16 05:45:49.670880 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-16 05:45:49.670891 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 05:45:49.670902 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 05:45:49.670912 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-16 05:45:49.670923 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-16 05:45:49.670934 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-16 05:45:49.670945 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 05:45:49.670955 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2026-04-16 05:45:49.670966 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 05:45:49.671016 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-04-16 05:45:49.671028 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-16 05:45:49.671038 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-16 05:45:49.671049 | orchestrator |
2026-04-16 05:45:49.671060 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-16 05:45:49.671071 | orchestrator | Thursday 16 April 2026 05:45:46 +0000 (0:00:03.090) 0:03:44.508 ********
2026-04-16 05:45:49.671081 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.671092 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:49.671103 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:49.671113 | orchestrator |
2026-04-16 05:45:49.671130 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-16 05:45:49.671141 | orchestrator | Thursday 16 April 2026 05:45:47 +0000 (0:00:01.181) 0:03:45.689 ********
2026-04-16 05:45:49.671152 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.671163 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.671174 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.671185 | orchestrator |
2026-04-16 05:45:49.671195 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-16 05:45:49.671206 | orchestrator | Thursday 16 April 2026 05:45:47 +0000 (0:00:00.503) 0:03:46.193 ********
2026-04-16 05:45:49.671217 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:45:49.671228 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:45:49.671238 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:45:49.671251 | orchestrator |
2026-04-16 05:45:49.671269 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-16 05:45:49.671289 | orchestrator | Thursday 16 April 2026 05:45:48 +0000 (0:00:00.348) 0:03:46.542 ********
2026-04-16 05:45:49.671307 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:45:49.671325 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:45:49.671343 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:45:49.671360 | orchestrator |
2026-04-16 05:45:49.671385 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-16 05:46:50.261692 | orchestrator | Thursday 16 April 2026 05:45:49 +0000 (0:00:01.472) 0:03:48.014 ********
2026-04-16 05:46:50.261815 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:46:50.261841 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:46:50.261852 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:46:50.261862 | orchestrator |
2026-04-16 05:46:50.261873 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-16 05:46:50.261883 | orchestrator | Thursday 16 April 2026 05:45:50 +0000 (0:00:01.320) 0:03:49.335 ********
2026-04-16 05:46:50.261894 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.261903 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.261913 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.261923 | orchestrator |
2026-04-16 05:46:50.261933 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-16 05:46:50.261989 | orchestrator | Thursday 16 April 2026 05:45:51 +0000 (0:00:00.509) 0:03:49.844 ********
2026-04-16 05:46:50.262008 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:46:50.262089 | orchestrator |
2026-04-16 05:46:50.262108 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-16 05:46:50.262126 | orchestrator | Thursday 16 April 2026 05:45:51 +0000 (0:00:00.509) 0:03:50.354 ********
2026-04-16 05:46:50.262144 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.262162 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.262180 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.262198 | orchestrator |
2026-04-16 05:46:50.262216 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-16 05:46:50.262229 | orchestrator | Thursday 16 April 2026 05:45:52 +0000 (0:00:00.285) 0:03:50.639 ********
2026-04-16 05:46:50.262241 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.262279 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.262291 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.262302 | orchestrator |
2026-04-16 05:46:50.262313 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-16 05:46:50.262324 | orchestrator | Thursday 16 April 2026 05:45:52 +0000 (0:00:00.484) 0:03:51.124 ********
2026-04-16 05:46:50.262335 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:46:50.262346 | orchestrator |
2026-04-16 05:46:50.262356 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-16 05:46:50.262366 | orchestrator | Thursday 16 April 2026 05:45:53 +0000 (0:00:00.539) 0:03:51.664 ********
2026-04-16 05:46:50.262375 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:46:50.262385 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:46:50.262394 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:46:50.262404 | orchestrator |
2026-04-16 05:46:50.262413 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-16 05:46:50.262423 | orchestrator | Thursday 16 April 2026 05:45:54 +0000 (0:00:01.633) 0:03:53.298 ********
2026-04-16 05:46:50.262432 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:46:50.262442 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:46:50.262451 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:46:50.262461 | orchestrator |
2026-04-16 05:46:50.262470 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-16 05:46:50.262480 | orchestrator | Thursday 16 April 2026 05:45:56 +0000 (0:00:01.470) 0:03:54.769 ********
2026-04-16 05:46:50.262490 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:46:50.262499 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:46:50.262509 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:46:50.262518 | orchestrator |
2026-04-16 05:46:50.262528 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-16 05:46:50.262538 | orchestrator | Thursday 16 April 2026 05:45:58 +0000 (0:00:01.654) 0:03:56.424 ********
2026-04-16 05:46:50.262547 | orchestrator | changed: [testbed-node-0]
2026-04-16 05:46:50.262556 | orchestrator | changed: [testbed-node-1]
2026-04-16 05:46:50.262566 | orchestrator | changed: [testbed-node-2]
2026-04-16 05:46:50.262575 | orchestrator |
2026-04-16 05:46:50.262585 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-16 05:46:50.262595 | orchestrator | Thursday 16 April 2026 05:45:59 +0000 (0:00:01.877) 0:03:58.301 ********
2026-04-16 05:46:50.262604 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:46:50.262614 | orchestrator |
2026-04-16 05:46:50.262624 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-16 05:46:50.262633 | orchestrator | Thursday 16 April 2026 05:46:00 +0000 (0:00:00.723) 0:03:59.025 ********
2026-04-16 05:46:50.262658 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
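The quorum wait above is a bounded retry loop: the task repeatedly queries the monitors' status until every expected mon has joined the quorum or the retries run out. A minimal sketch of that check, assuming output shaped like `ceph quorum_status --format json` (the sample payload below is illustrative, not taken from this run):

```python
import json

def quorum_formed(quorum_status_json: str, expected_mons: set) -> bool:
    """Return True once every expected monitor name appears in quorum_names."""
    status = json.loads(quorum_status_json)
    # `quorum_names` lists the mons currently participating in the quorum.
    return expected_mons.issubset(set(status.get("quorum_names", [])))

expected = {"testbed-node-0", "testbed-node-1", "testbed-node-2"}

# Early poll: only one mon has joined, so the task would retry.
partial = '{"quorum_names": ["testbed-node-0"]}'
print(quorum_formed(partial, expected))   # False

# Later poll: all three mons in quorum, so the task reports ok.
complete = '{"quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"]}'
print(quorum_formed(complete, expected))  # True
```

This mirrors why the log shows one `FAILED - RETRYING` line followed by `ok: [testbed-node-0]`: the first poll happened before all monitors had joined, and a later poll succeeded.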
2026-04-16 05:46:50.262668 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:46:50.262679 | orchestrator |
2026-04-16 05:46:50.262688 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-16 05:46:50.262698 | orchestrator | Thursday 16 April 2026 05:46:22 +0000 (0:00:21.891) 0:04:20.917 ********
2026-04-16 05:46:50.262707 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:46:50.262717 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:46:50.262727 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:46:50.262736 | orchestrator |
2026-04-16 05:46:50.262746 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-16 05:46:50.262755 | orchestrator | Thursday 16 April 2026 05:46:31 +0000 (0:00:09.124) 0:04:30.042 ********
2026-04-16 05:46:50.262765 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.262774 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.262784 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.262793 | orchestrator |
2026-04-16 05:46:50.262810 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-16 05:46:50.262820 | orchestrator | Thursday 16 April 2026 05:46:31 +0000 (0:00:00.272) 0:04:30.314 ********
2026-04-16 05:46:50.262850 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-16 05:46:50.262863 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-16 05:46:50.262874 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-16 05:46:50.262885 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-16 05:46:50.262895 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-16 05:46:50.262906 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__4760a2b9001748a35fedef6311082a9e6afefc32'}])
2026-04-16 05:46:50.262917 | orchestrator |
2026-04-16 05:46:50.262927 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-16 05:46:50.262937 | orchestrator | Thursday 16 April 2026 05:46:46 +0000 (0:00:14.989) 0:04:45.303 ********
2026-04-16 05:46:50.262975 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.262985 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.262995 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.263005 | orchestrator |
2026-04-16 05:46:50.263014 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-04-16 05:46:50.263024 | orchestrator | Thursday 16 April 2026 05:46:47 +0000 (0:00:00.324) 0:04:45.628 ********
2026-04-16 05:46:50.263033 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:46:50.263043 | orchestrator |
2026-04-16 05:46:50.263053 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-04-16 05:46:50.263062 | orchestrator | Thursday 16 April 2026 05:46:47 +0000 (0:00:00.710) 0:04:46.338 ********
2026-04-16 05:46:50.263072 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:46:50.263081 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:46:50.263091 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:46:50.263101 | orchestrator |
2026-04-16 05:46:50.263110 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-04-16 05:46:50.263127 | orchestrator | Thursday 16 April 2026 05:46:48 +0000 (0:00:00.322) 0:04:46.661 ********
2026-04-16 05:46:50.263142 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.263151 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:46:50.263161 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:46:50.263170 | orchestrator |
2026-04-16 05:46:50.263180 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-04-16 05:46:50.263190 | orchestrator | Thursday 16 April 2026 05:46:48 +0000 (0:00:00.308) 0:04:46.969 ********
2026-04-16 05:46:50.263199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 05:46:50.263209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 05:46:50.263219 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 05:46:50.263228 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:46:50.263238 | orchestrator |
2026-04-16 05:46:50.263247 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-04-16 05:46:50.263257 | orchestrator | Thursday 16 April 2026 05:46:49 +0000 (0:00:00.848) 0:04:47.817 ********
2026-04-16 05:46:50.263266 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:46:50.263276 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:46:50.263285 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:46:50.263295 | orchestrator |
2026-04-16 05:46:50.263304 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-04-16 05:46:50.263314 | orchestrator |
2026-04-16 05:46:50.263330 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 05:47:16.190211 | orchestrator | Thursday 16 April 2026 05:46:50 +0000 (0:00:00.787) 0:04:48.605 ********
2026-04-16 05:47:16.190317 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:47:16.190331 | orchestrator |
2026-04-16 05:47:16.190342 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 05:47:16.190351 | orchestrator | Thursday 16 April 2026 05:46:50 +0000 (0:00:00.496) 0:04:49.102 ********
2026-04-16 05:47:16.190360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-04-16 05:47:16.190369 | orchestrator | 2026-04-16 05:47:16.190378 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 05:47:16.190387 | orchestrator | Thursday 16 April 2026 05:46:51 +0000 (0:00:00.706) 0:04:49.808 ******** 2026-04-16 05:47:16.190396 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.190405 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.190414 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.190423 | orchestrator | 2026-04-16 05:47:16.190431 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 05:47:16.190440 | orchestrator | Thursday 16 April 2026 05:46:52 +0000 (0:00:00.692) 0:04:50.500 ******** 2026-04-16 05:47:16.190449 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190458 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190467 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.190476 | orchestrator | 2026-04-16 05:47:16.190484 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 05:47:16.190493 | orchestrator | Thursday 16 April 2026 05:46:52 +0000 (0:00:00.286) 0:04:50.787 ******** 2026-04-16 05:47:16.190502 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190510 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190519 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.190527 | orchestrator | 2026-04-16 05:47:16.190536 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 05:47:16.190545 | orchestrator | Thursday 16 April 2026 05:46:52 +0000 (0:00:00.501) 0:04:51.289 ******** 2026-04-16 05:47:16.190553 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190563 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190593 | orchestrator | skipping: 
[testbed-node-2] 2026-04-16 05:47:16.190602 | orchestrator | 2026-04-16 05:47:16.190610 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 05:47:16.190619 | orchestrator | Thursday 16 April 2026 05:46:53 +0000 (0:00:00.320) 0:04:51.609 ******** 2026-04-16 05:47:16.190628 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.190636 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.190645 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.190653 | orchestrator | 2026-04-16 05:47:16.190662 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 05:47:16.190670 | orchestrator | Thursday 16 April 2026 05:46:53 +0000 (0:00:00.706) 0:04:52.315 ******** 2026-04-16 05:47:16.190679 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190688 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190696 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.190705 | orchestrator | 2026-04-16 05:47:16.190713 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 05:47:16.190722 | orchestrator | Thursday 16 April 2026 05:46:54 +0000 (0:00:00.279) 0:04:52.595 ******** 2026-04-16 05:47:16.190730 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190739 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190747 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.190756 | orchestrator | 2026-04-16 05:47:16.190766 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 05:47:16.190777 | orchestrator | Thursday 16 April 2026 05:46:54 +0000 (0:00:00.527) 0:04:53.122 ******** 2026-04-16 05:47:16.190787 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.190797 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.190807 | orchestrator | ok: [testbed-node-2] 2026-04-16 
05:47:16.190817 | orchestrator | 2026-04-16 05:47:16.190826 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 05:47:16.190837 | orchestrator | Thursday 16 April 2026 05:46:55 +0000 (0:00:00.733) 0:04:53.855 ******** 2026-04-16 05:47:16.190846 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.190856 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.190866 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.190876 | orchestrator | 2026-04-16 05:47:16.190886 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 05:47:16.190896 | orchestrator | Thursday 16 April 2026 05:46:56 +0000 (0:00:00.741) 0:04:54.597 ******** 2026-04-16 05:47:16.190906 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.190916 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.190958 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.190980 | orchestrator | 2026-04-16 05:47:16.190990 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 05:47:16.191009 | orchestrator | Thursday 16 April 2026 05:46:56 +0000 (0:00:00.304) 0:04:54.902 ******** 2026-04-16 05:47:16.191018 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.191027 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.191035 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.191044 | orchestrator | 2026-04-16 05:47:16.191053 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 05:47:16.191061 | orchestrator | Thursday 16 April 2026 05:46:57 +0000 (0:00:00.532) 0:04:55.435 ******** 2026-04-16 05:47:16.191070 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191079 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191087 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191096 | orchestrator | 
2026-04-16 05:47:16.191105 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 05:47:16.191113 | orchestrator | Thursday 16 April 2026 05:46:57 +0000 (0:00:00.322) 0:04:55.757 ******** 2026-04-16 05:47:16.191122 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191130 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191139 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191148 | orchestrator | 2026-04-16 05:47:16.191191 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 05:47:16.191201 | orchestrator | Thursday 16 April 2026 05:46:57 +0000 (0:00:00.300) 0:04:56.058 ******** 2026-04-16 05:47:16.191210 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191218 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191227 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191236 | orchestrator | 2026-04-16 05:47:16.191244 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 05:47:16.191253 | orchestrator | Thursday 16 April 2026 05:46:57 +0000 (0:00:00.287) 0:04:56.345 ******** 2026-04-16 05:47:16.191261 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191270 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191279 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191287 | orchestrator | 2026-04-16 05:47:16.191296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 05:47:16.191304 | orchestrator | Thursday 16 April 2026 05:46:58 +0000 (0:00:00.543) 0:04:56.889 ******** 2026-04-16 05:47:16.191313 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191321 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191330 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191338 | orchestrator | 
2026-04-16 05:47:16.191347 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 05:47:16.191355 | orchestrator | Thursday 16 April 2026 05:46:58 +0000 (0:00:00.300) 0:04:57.189 ******** 2026-04-16 05:47:16.191364 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.191373 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.191381 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.191390 | orchestrator | 2026-04-16 05:47:16.191398 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 05:47:16.191413 | orchestrator | Thursday 16 April 2026 05:46:59 +0000 (0:00:00.308) 0:04:57.498 ******** 2026-04-16 05:47:16.191427 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.191442 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.191456 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.191469 | orchestrator | 2026-04-16 05:47:16.191483 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 05:47:16.191497 | orchestrator | Thursday 16 April 2026 05:46:59 +0000 (0:00:00.309) 0:04:57.807 ******** 2026-04-16 05:47:16.191510 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.191524 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.191538 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.191552 | orchestrator | 2026-04-16 05:47:16.191566 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-16 05:47:16.191580 | orchestrator | Thursday 16 April 2026 05:47:00 +0000 (0:00:00.740) 0:04:58.548 ******** 2026-04-16 05:47:16.191594 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 05:47:16.191608 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 05:47:16.191624 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-16 05:47:16.191637 | orchestrator | 2026-04-16 05:47:16.191646 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-16 05:47:16.191654 | orchestrator | Thursday 16 April 2026 05:47:00 +0000 (0:00:00.609) 0:04:59.157 ******** 2026-04-16 05:47:16.191663 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:47:16.191672 | orchestrator | 2026-04-16 05:47:16.191680 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-04-16 05:47:16.191689 | orchestrator | Thursday 16 April 2026 05:47:01 +0000 (0:00:00.493) 0:04:59.651 ******** 2026-04-16 05:47:16.191697 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:47:16.191706 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:47:16.191714 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:47:16.191723 | orchestrator | 2026-04-16 05:47:16.191731 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-04-16 05:47:16.191748 | orchestrator | Thursday 16 April 2026 05:47:02 +0000 (0:00:00.920) 0:05:00.571 ******** 2026-04-16 05:47:16.191756 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:47:16.191764 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:47:16.191773 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:47:16.191781 | orchestrator | 2026-04-16 05:47:16.191790 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-04-16 05:47:16.191798 | orchestrator | Thursday 16 April 2026 05:47:02 +0000 (0:00:00.308) 0:05:00.880 ******** 2026-04-16 05:47:16.191807 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-16 05:47:16.191816 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-16 05:47:16.191824 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-04-16 05:47:16.191833 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-04-16 05:47:16.191841 | orchestrator | 2026-04-16 05:47:16.191857 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-04-16 05:47:16.191866 | orchestrator | Thursday 16 April 2026 05:47:13 +0000 (0:00:10.892) 0:05:11.773 ******** 2026-04-16 05:47:16.191874 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:47:16.191883 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:47:16.191891 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:47:16.191899 | orchestrator | 2026-04-16 05:47:16.191908 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-04-16 05:47:16.191917 | orchestrator | Thursday 16 April 2026 05:47:13 +0000 (0:00:00.330) 0:05:12.104 ******** 2026-04-16 05:47:16.191925 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-16 05:47:16.191963 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-16 05:47:16.191972 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-16 05:47:16.191981 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-04-16 05:47:16.191990 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:47:16.191998 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:47:16.192007 | orchestrator | 2026-04-16 05:47:16.192015 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-04-16 05:47:16.192032 | orchestrator | Thursday 16 April 2026 05:47:16 +0000 (0:00:02.428) 0:05:14.532 ******** 2026-04-16 05:48:16.081341 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-04-16 05:48:16.081443 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-04-16 05:48:16.081459 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-04-16 
05:48:16.081470 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-04-16 05:48:16.081481 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-04-16 05:48:16.081492 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-04-16 05:48:16.081503 | orchestrator | 2026-04-16 05:48:16.081515 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-04-16 05:48:16.081527 | orchestrator | Thursday 16 April 2026 05:47:17 +0000 (0:00:01.210) 0:05:15.743 ******** 2026-04-16 05:48:16.081538 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:48:16.081548 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:48:16.081559 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:48:16.081569 | orchestrator | 2026-04-16 05:48:16.081581 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-04-16 05:48:16.081592 | orchestrator | Thursday 16 April 2026 05:47:18 +0000 (0:00:00.660) 0:05:16.403 ******** 2026-04-16 05:48:16.081602 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.081614 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:48:16.081624 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:48:16.081635 | orchestrator | 2026-04-16 05:48:16.081646 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-16 05:48:16.081657 | orchestrator | Thursday 16 April 2026 05:47:18 +0000 (0:00:00.304) 0:05:16.708 ******** 2026-04-16 05:48:16.081667 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.081696 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:48:16.081708 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:48:16.081718 | orchestrator | 2026-04-16 05:48:16.081729 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-16 05:48:16.081740 | orchestrator | Thursday 16 April 2026 05:47:18 +0000 (0:00:00.512) 0:05:17.221 
******** 2026-04-16 05:48:16.081751 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:48:16.081762 | orchestrator | 2026-04-16 05:48:16.081773 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-04-16 05:48:16.081784 | orchestrator | Thursday 16 April 2026 05:47:19 +0000 (0:00:00.504) 0:05:17.725 ******** 2026-04-16 05:48:16.081795 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.081805 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:48:16.081816 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:48:16.081827 | orchestrator | 2026-04-16 05:48:16.081837 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-04-16 05:48:16.081848 | orchestrator | Thursday 16 April 2026 05:47:19 +0000 (0:00:00.313) 0:05:18.039 ******** 2026-04-16 05:48:16.081859 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.081869 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:48:16.081880 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:48:16.081891 | orchestrator | 2026-04-16 05:48:16.081904 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-04-16 05:48:16.081941 | orchestrator | Thursday 16 April 2026 05:47:20 +0000 (0:00:00.520) 0:05:18.559 ******** 2026-04-16 05:48:16.081955 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:48:16.081967 | orchestrator | 2026-04-16 05:48:16.081980 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-04-16 05:48:16.081992 | orchestrator | Thursday 16 April 2026 05:47:20 +0000 (0:00:00.526) 0:05:19.086 ******** 2026-04-16 05:48:16.082005 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.082070 | orchestrator | changed: 
[testbed-node-1] 2026-04-16 05:48:16.082083 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.082096 | orchestrator | 2026-04-16 05:48:16.082109 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-04-16 05:48:16.082131 | orchestrator | Thursday 16 April 2026 05:47:21 +0000 (0:00:01.228) 0:05:20.315 ******** 2026-04-16 05:48:16.082144 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.082157 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:48:16.082169 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.082181 | orchestrator | 2026-04-16 05:48:16.082194 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-04-16 05:48:16.082206 | orchestrator | Thursday 16 April 2026 05:47:23 +0000 (0:00:01.379) 0:05:21.695 ******** 2026-04-16 05:48:16.082219 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:48:16.082231 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.082245 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.082257 | orchestrator | 2026-04-16 05:48:16.082269 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-04-16 05:48:16.082291 | orchestrator | Thursday 16 April 2026 05:47:25 +0000 (0:00:01.748) 0:05:23.443 ******** 2026-04-16 05:48:16.082303 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:48:16.082313 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.082324 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.082335 | orchestrator | 2026-04-16 05:48:16.082345 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-16 05:48:16.082356 | orchestrator | Thursday 16 April 2026 05:47:27 +0000 (0:00:01.918) 0:05:25.362 ******** 2026-04-16 05:48:16.082367 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.082378 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 05:48:16.082388 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-04-16 05:48:16.082409 | orchestrator | 2026-04-16 05:48:16.082420 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-04-16 05:48:16.082431 | orchestrator | Thursday 16 April 2026 05:47:27 +0000 (0:00:00.604) 0:05:25.967 ******** 2026-04-16 05:48:16.082441 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-04-16 05:48:16.082452 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-04-16 05:48:16.082479 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-04-16 05:48:16.082491 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-04-16 05:48:16.082502 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-04-16 05:48:16.082513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:48:16.082523 | orchestrator | 2026-04-16 05:48:16.082534 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-04-16 05:48:16.082545 | orchestrator | Thursday 16 April 2026 05:47:57 +0000 (0:00:30.337) 0:05:56.304 ******** 2026-04-16 05:48:16.082556 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:48:16.082567 | orchestrator | 2026-04-16 05:48:16.082578 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-04-16 05:48:16.082588 | orchestrator | Thursday 16 April 2026 05:47:59 +0000 (0:00:01.400) 0:05:57.705 ******** 2026-04-16 05:48:16.082599 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:48:16.082610 | orchestrator | 2026-04-16 05:48:16.082621 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-04-16 05:48:16.082631 | orchestrator | Thursday 16 April 2026 05:47:59 +0000 (0:00:00.301) 0:05:58.006 ******** 2026-04-16 05:48:16.082642 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:48:16.082653 | orchestrator | 2026-04-16 05:48:16.082663 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-04-16 05:48:16.082674 | orchestrator | Thursday 16 April 2026 05:47:59 +0000 (0:00:00.146) 0:05:58.153 ******** 2026-04-16 05:48:16.082685 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-04-16 05:48:16.082696 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-04-16 05:48:16.082706 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-04-16 05:48:16.082717 | orchestrator | 2026-04-16 05:48:16.082728 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-04-16 05:48:16.082739 | orchestrator | Thursday 16 April 2026 05:48:06 +0000 (0:00:06.427) 0:06:04.580 ******** 2026-04-16 05:48:16.082750 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-04-16 05:48:16.082760 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-04-16 05:48:16.082771 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-04-16 05:48:16.082781 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-04-16 05:48:16.082792 | orchestrator | 2026-04-16 05:48:16.082803 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-16 05:48:16.082814 | orchestrator | Thursday 16 April 2026 05:48:11 +0000 (0:00:05.056) 0:06:09.636 ******** 2026-04-16 05:48:16.082824 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.082835 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:48:16.082846 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.082857 | orchestrator | 2026-04-16 05:48:16.082867 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-04-16 05:48:16.082878 | orchestrator | Thursday 16 April 2026 05:48:11 +0000 (0:00:00.671) 0:06:10.307 ******** 2026-04-16 05:48:16.082889 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:48:16.082921 | orchestrator | 2026-04-16 05:48:16.082933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-04-16 05:48:16.082944 | orchestrator | Thursday 16 April 2026 05:48:12 +0000 (0:00:00.502) 0:06:10.810 ******** 2026-04-16 05:48:16.082955 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:48:16.082966 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:48:16.082976 | orchestrator | ok: 
[testbed-node-2] 2026-04-16 05:48:16.082987 | orchestrator | 2026-04-16 05:48:16.082997 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-04-16 05:48:16.083008 | orchestrator | Thursday 16 April 2026 05:48:12 +0000 (0:00:00.524) 0:06:11.335 ******** 2026-04-16 05:48:16.083019 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:48:16.083030 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:48:16.083040 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:48:16.083051 | orchestrator | 2026-04-16 05:48:16.083062 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-04-16 05:48:16.083072 | orchestrator | Thursday 16 April 2026 05:48:14 +0000 (0:00:01.201) 0:06:12.536 ******** 2026-04-16 05:48:16.083088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 05:48:16.083099 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 05:48:16.083110 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 05:48:16.083121 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:48:16.083131 | orchestrator | 2026-04-16 05:48:16.083142 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-04-16 05:48:16.083153 | orchestrator | Thursday 16 April 2026 05:48:14 +0000 (0:00:00.623) 0:06:13.159 ******** 2026-04-16 05:48:16.083164 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:48:16.083174 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:48:16.083185 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:48:16.083196 | orchestrator | 2026-04-16 05:48:16.083206 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-04-16 05:48:16.083217 | orchestrator | 2026-04-16 05:48:16.083228 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 
05:48:16.083239 | orchestrator | Thursday 16 April 2026 05:48:15 +0000 (0:00:00.543) 0:06:13.703 ******** 2026-04-16 05:48:16.083250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:48:16.083262 | orchestrator | 2026-04-16 05:48:16.083273 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 05:48:16.083289 | orchestrator | Thursday 16 April 2026 05:48:16 +0000 (0:00:00.727) 0:06:14.431 ******** 2026-04-16 05:48:32.059173 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:48:32.059293 | orchestrator | 2026-04-16 05:48:32.059310 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 05:48:32.059324 | orchestrator | Thursday 16 April 2026 05:48:16 +0000 (0:00:00.684) 0:06:15.115 ******** 2026-04-16 05:48:32.059336 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:48:32.059348 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:48:32.059359 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:48:32.059370 | orchestrator | 2026-04-16 05:48:32.059381 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 05:48:32.059393 | orchestrator | Thursday 16 April 2026 05:48:17 +0000 (0:00:00.299) 0:06:15.414 ******** 2026-04-16 05:48:32.059404 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:48:32.059415 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:48:32.059426 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:48:32.059437 | orchestrator | 2026-04-16 05:48:32.059448 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 05:48:32.059459 | orchestrator | Thursday 16 April 2026 05:48:17 +0000 (0:00:00.648) 0:06:16.063 ******** 
2026-04-16 05:48:32.059491 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.059502 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.059513 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.059526 | orchestrator |
2026-04-16 05:48:32.059545 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 05:48:32.059572 | orchestrator | Thursday 16 April 2026 05:48:18 +0000 (0:00:00.676) 0:06:16.739 ********
2026-04-16 05:48:32.059591 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.059609 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.059626 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.059643 | orchestrator |
2026-04-16 05:48:32.059660 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 05:48:32.059678 | orchestrator | Thursday 16 April 2026 05:48:19 +0000 (0:00:00.920) 0:06:17.660 ********
2026-04-16 05:48:32.059695 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.059714 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.059732 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.059750 | orchestrator |
2026-04-16 05:48:32.059769 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 05:48:32.059789 | orchestrator | Thursday 16 April 2026 05:48:19 +0000 (0:00:00.323) 0:06:17.983 ********
2026-04-16 05:48:32.059807 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.059827 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.059846 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.059866 | orchestrator |
2026-04-16 05:48:32.059885 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 05:48:32.059933 | orchestrator | Thursday 16 April 2026 05:48:19 +0000 (0:00:00.276) 0:06:18.260 ********
2026-04-16 05:48:32.059952 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.059972 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.059991 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060010 | orchestrator |
2026-04-16 05:48:32.060029 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 05:48:32.060049 | orchestrator | Thursday 16 April 2026 05:48:20 +0000 (0:00:00.265) 0:06:18.526 ********
2026-04-16 05:48:32.060069 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.060086 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.060104 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.060121 | orchestrator |
2026-04-16 05:48:32.060140 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 05:48:32.060160 | orchestrator | Thursday 16 April 2026 05:48:21 +0000 (0:00:00.888) 0:06:19.414 ********
2026-04-16 05:48:32.060179 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.060197 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.060216 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.060234 | orchestrator |
2026-04-16 05:48:32.060253 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 05:48:32.060268 | orchestrator | Thursday 16 April 2026 05:48:21 +0000 (0:00:00.683) 0:06:20.098 ********
2026-04-16 05:48:32.060279 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.060290 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.060301 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060312 | orchestrator |
2026-04-16 05:48:32.060323 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 05:48:32.060334 | orchestrator | Thursday 16 April 2026 05:48:22 +0000 (0:00:00.297) 0:06:20.395 ********
2026-04-16 05:48:32.060345 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.060356 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.060366 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060377 | orchestrator |
2026-04-16 05:48:32.060404 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 05:48:32.060415 | orchestrator | Thursday 16 April 2026 05:48:22 +0000 (0:00:00.303) 0:06:20.699 ********
2026-04-16 05:48:32.060426 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.060437 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.060459 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.060470 | orchestrator |
2026-04-16 05:48:32.060481 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 05:48:32.060492 | orchestrator | Thursday 16 April 2026 05:48:22 +0000 (0:00:00.571) 0:06:21.270 ********
2026-04-16 05:48:32.060503 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.060521 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.060539 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.060558 | orchestrator |
2026-04-16 05:48:32.060575 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 05:48:32.060594 | orchestrator | Thursday 16 April 2026 05:48:23 +0000 (0:00:00.342) 0:06:21.613 ********
2026-04-16 05:48:32.060613 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.060632 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.060649 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.060667 | orchestrator |
2026-04-16 05:48:32.060678 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 05:48:32.060689 | orchestrator | Thursday 16 April 2026 05:48:23 +0000 (0:00:00.346) 0:06:21.959 ********
2026-04-16 05:48:32.060700 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.060732 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.060744 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060755 | orchestrator |
2026-04-16 05:48:32.060766 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 05:48:32.060785 | orchestrator | Thursday 16 April 2026 05:48:23 +0000 (0:00:00.283) 0:06:22.242 ********
2026-04-16 05:48:32.060803 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.060820 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.060837 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060855 | orchestrator |
2026-04-16 05:48:32.060874 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 05:48:32.060893 | orchestrator | Thursday 16 April 2026 05:48:24 +0000 (0:00:00.528) 0:06:22.771 ********
2026-04-16 05:48:32.060938 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.060950 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.060961 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.060971 | orchestrator |
2026-04-16 05:48:32.060996 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 05:48:32.061018 | orchestrator | Thursday 16 April 2026 05:48:24 +0000 (0:00:00.282) 0:06:23.053 ********
2026-04-16 05:48:32.061029 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.061040 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.061051 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.061061 | orchestrator |
2026-04-16 05:48:32.061072 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 05:48:32.061083 | orchestrator | Thursday 16 April 2026 05:48:25 +0000 (0:00:00.327) 0:06:23.381 ********
2026-04-16 05:48:32.061094 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.061104 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.061115 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.061126 | orchestrator |
2026-04-16 05:48:32.061136 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-16 05:48:32.061147 | orchestrator | Thursday 16 April 2026 05:48:25 +0000 (0:00:00.707) 0:06:24.089 ********
2026-04-16 05:48:32.061158 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.061169 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.061187 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.061206 | orchestrator |
2026-04-16 05:48:32.061224 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-16 05:48:32.061243 | orchestrator | Thursday 16 April 2026 05:48:26 +0000 (0:00:00.306) 0:06:24.395 ********
2026-04-16 05:48:32.061262 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:48:32.061282 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:48:32.061314 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:48:32.061335 | orchestrator |
2026-04-16 05:48:32.061356 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-16 05:48:32.061375 | orchestrator | Thursday 16 April 2026 05:48:26 +0000 (0:00:00.606) 0:06:25.001 ********
2026-04-16 05:48:32.061394 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:48:32.061414 | orchestrator |
2026-04-16 05:48:32.061435 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-16 05:48:32.061455 | orchestrator | Thursday 16 April 2026 05:48:27 +0000 (0:00:00.467) 0:06:25.468 ********
2026-04-16 05:48:32.061474 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.061494 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.061512 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.061531 | orchestrator |
2026-04-16 05:48:32.061550 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-16 05:48:32.061571 | orchestrator | Thursday 16 April 2026 05:48:27 +0000 (0:00:00.515) 0:06:25.984 ********
2026-04-16 05:48:32.061592 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:48:32.061611 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:48:32.061625 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:48:32.061635 | orchestrator |
2026-04-16 05:48:32.061653 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-16 05:48:32.061672 | orchestrator | Thursday 16 April 2026 05:48:27 +0000 (0:00:00.310) 0:06:26.294 ********
2026-04-16 05:48:32.061689 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.061707 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.061727 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.061745 | orchestrator |
2026-04-16 05:48:32.061763 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-16 05:48:32.061781 | orchestrator | Thursday 16 April 2026 05:48:28 +0000 (0:00:00.597) 0:06:26.892 ********
2026-04-16 05:48:32.061800 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:48:32.061830 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:48:32.061850 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:48:32.061866 | orchestrator |
2026-04-16 05:48:32.061883 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-16 05:48:32.061927 | orchestrator | Thursday 16 April 2026 05:48:29 +0000 (0:00:00.514) 0:06:27.406 ********
2026-04-16 05:48:32.061949 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-16 05:48:32.061968 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-16 05:48:32.061987 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-16 05:48:32.062007 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-16 05:48:32.062088 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-16 05:48:32.062100 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-16 05:48:32.062111 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-16 05:48:32.062138 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-16 05:49:36.636782 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-16 05:49:36.636967 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-16 05:49:36.636992 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-16 05:49:36.637009 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-16 05:49:36.637024 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-16 05:49:36.637069 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-16 05:49:36.637085 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-16 05:49:36.637101 | orchestrator |
2026-04-16 05:49:36.637116 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-16 05:49:36.637132 | orchestrator | Thursday 16 April 2026 05:48:32 +0000 (0:00:02.998) 0:06:30.405 ********
2026-04-16 05:49:36.637147 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:49:36.637163 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:49:36.637177 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:49:36.637191 | orchestrator |
2026-04-16 05:49:36.637205 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-16 05:49:36.637219 | orchestrator | Thursday 16 April 2026 05:48:32 +0000 (0:00:00.305) 0:06:30.710 ********
2026-04-16 05:49:36.637234 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:49:36.637251 | orchestrator |
2026-04-16 05:49:36.637267 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-16 05:49:36.637284 | orchestrator | Thursday 16 April 2026 05:48:33 +0000 (0:00:00.675) 0:06:31.386 ********
2026-04-16 05:49:36.637302 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-16 05:49:36.637320 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-16 05:49:36.637337 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-16 05:49:36.637356 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-04-16 05:49:36.637375 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-04-16 05:49:36.637392 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-16 05:49:36.637410 | orchestrator |
2026-04-16 05:49:36.637428 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-16 05:49:36.637446 | orchestrator | Thursday 16 April 2026 05:48:34 +0000 (0:00:01.000) 0:06:32.387 ********
2026-04-16 05:49:36.637463 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 05:49:36.637479 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-16 05:49:36.637495 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-16 05:49:36.637513 | orchestrator |
2026-04-16 05:49:36.637531 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-16 05:49:36.637549 | orchestrator | Thursday 16 April 2026 05:48:36 +0000 (0:00:02.051) 0:06:34.438 ********
2026-04-16 05:49:36.637567 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-04-16 05:49:36.637585 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-04-16 05:49:36.637603 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:49:36.637620 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-04-16 05:49:36.637638 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-04-16 05:49:36.637654 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:49:36.637670 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-04-16 05:49:36.637685 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 05:49:36.637700 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:49:36.637714 | orchestrator |
2026-04-16 05:49:36.637729 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-16 05:49:36.637743 | orchestrator | Thursday 16 April 2026 05:48:37 +0000 (0:00:01.105) 0:06:35.544 ********
2026-04-16 05:49:36.637758 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-16 05:49:36.637772 | orchestrator |
2026-04-16 05:49:36.637786 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-16 05:49:36.637801 | orchestrator | Thursday 16 April 2026 05:48:39 +0000 (0:00:02.019) 0:06:37.564 ********
2026-04-16 05:49:36.637833 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:49:36.637861 | orchestrator |
2026-04-16 05:49:36.637876 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-04-16 05:49:36.637923 | orchestrator | Thursday 16 April 2026 05:48:39 +0000 (0:00:00.747) 0:06:38.312 ********
2026-04-16 05:49:36.637938 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 05:49:36.637954 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 05:49:36.637997 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 05:49:36.638115 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 05:49:36.638173 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 05:49:36.638190 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 05:49:36.638205 | orchestrator |
2026-04-16 05:49:36.638220 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-16 05:49:36.638235 | orchestrator | Thursday 16 April 2026 05:49:19 +0000 (0:00:40.014) 0:07:18.327 ********
2026-04-16 05:49:36.638251 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:49:36.638267 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:49:36.638280 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:49:36.638289 | orchestrator |
2026-04-16 05:49:36.638297 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-16 05:49:36.638306 | orchestrator | Thursday 16 April 2026 05:49:20 +0000 (0:00:00.295) 0:07:18.622 ********
2026-04-16 05:49:36.638316 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:49:36.638324 | orchestrator |
2026-04-16 05:49:36.638333 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-16 05:49:36.638342 | orchestrator | Thursday 16 April 2026 05:49:21 +0000 (0:00:00.742) 0:07:19.364 ********
2026-04-16 05:49:36.638351 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:49:36.638359 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:49:36.638368 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:49:36.638376 | orchestrator |
2026-04-16 05:49:36.638385 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-16 05:49:36.638394 | orchestrator | Thursday 16 April 2026 05:49:21 +0000 (0:00:00.681) 0:07:20.046 ********
2026-04-16 05:49:36.638403 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:49:36.638412 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:49:36.638420 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:49:36.638429 | orchestrator |
2026-04-16 05:49:36.638437 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-16 05:49:36.638446 | orchestrator | Thursday 16 April 2026 05:49:24 +0000 (0:00:02.504) 0:07:22.551 ********
2026-04-16 05:49:36.638454 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:49:36.638464 | orchestrator |
2026-04-16 05:49:36.638472 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-16 05:49:36.638481 | orchestrator | Thursday 16 April 2026 05:49:24 +0000 (0:00:00.757) 0:07:23.308 ********
2026-04-16 05:49:36.638489 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:49:36.638498 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:49:36.638506 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:49:36.638515 | orchestrator |
2026-04-16 05:49:36.638524 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-16 05:49:36.638544 | orchestrator | Thursday 16 April 2026 05:49:26 +0000 (0:00:01.183) 0:07:24.492 ********
2026-04-16 05:49:36.638552 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:49:36.638560 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:49:36.638567 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:49:36.638575 | orchestrator |
2026-04-16 05:49:36.638583 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-16 05:49:36.638591 | orchestrator | Thursday 16 April 2026 05:49:27 +0000 (0:00:01.115) 0:07:25.607 ********
2026-04-16 05:49:36.638599 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:49:36.638606 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:49:36.638614 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:49:36.638621 | orchestrator |
2026-04-16 05:49:36.638629 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-16 05:49:36.638637 | orchestrator | Thursday 16 April 2026 05:49:29 +0000 (0:00:01.866) 0:07:27.474 ********
2026-04-16 05:49:36.638645 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:49:36.638652 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:49:36.638660 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:49:36.638668 | orchestrator |
2026-04-16 05:49:36.638676 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-16 05:49:36.638683 | orchestrator | Thursday 16 April 2026 05:49:29 +0000 (0:00:00.348) 0:07:27.823 ********
2026-04-16 05:49:36.638691 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:49:36.638699 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:49:36.638706 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:49:36.638714 | orchestrator |
2026-04-16 05:49:36.638722 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-16 05:49:36.638737 | orchestrator | Thursday 16 April 2026 05:49:29 +0000 (0:00:00.320) 0:07:28.143 ********
2026-04-16 05:49:36.638745 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 05:49:36.638753 | orchestrator | ok: [testbed-node-4] => (item=2)
2026-04-16 05:49:36.638761 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-16 05:49:36.638768 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-04-16 05:49:36.638776 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-04-16 05:49:36.638784 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-16 05:49:36.638791 | orchestrator |
2026-04-16 05:49:36.638800 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-16 05:49:36.638807 | orchestrator | Thursday 16 April 2026 05:49:30 +0000 (0:00:01.008) 0:07:29.152 ********
2026-04-16 05:49:36.638815 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-16 05:49:36.638823 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-16 05:49:36.638831 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-16 05:49:36.638838 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-16 05:49:36.638846 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-16 05:49:36.638854 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-16 05:49:36.638862 | orchestrator |
2026-04-16 05:49:36.638870 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-16 05:49:36.638929 | orchestrator | Thursday 16 April 2026 05:49:33 +0000 (0:00:02.332) 0:07:31.484 ********
2026-04-16 05:49:36.638940 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-16 05:49:36.638956 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-16 05:50:06.794446 | orchestrator | changed: [testbed-node-4] => (item=2)
2026-04-16 05:50:06.794568 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-16 05:50:06.794584 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-16 05:50:06.794596 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-04-16 05:50:06.794607 | orchestrator |
2026-04-16 05:50:06.794620 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-16 05:50:06.794632 | orchestrator | Thursday 16 April 2026 05:49:36 +0000 (0:00:03.501) 0:07:34.986 ********
2026-04-16 05:50:06.794665 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.794677 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.794688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 05:50:06.794699 | orchestrator |
2026-04-16 05:50:06.794709 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-16 05:50:06.794728 | orchestrator | Thursday 16 April 2026 05:49:39 +0000 (0:00:03.129) 0:07:38.115 ********
2026-04-16 05:50:06.794747 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.794766 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.794784 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-16 05:50:06.794805 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 05:50:06.794824 | orchestrator |
2026-04-16 05:50:06.794842 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-16 05:50:06.794861 | orchestrator | Thursday 16 April 2026 05:49:52 +0000 (0:00:12.547) 0:07:50.662 ********
2026-04-16 05:50:06.794909 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.794927 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.794944 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.794965 | orchestrator |
2026-04-16 05:50:06.794985 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-16 05:50:06.795005 | orchestrator | Thursday 16 April 2026 05:49:53 +0000 (0:00:01.082) 0:07:51.745 ********
2026-04-16 05:50:06.795025 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795052 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.795073 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.795091 | orchestrator |
2026-04-16 05:50:06.795109 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-04-16 05:50:06.795127 | orchestrator | Thursday 16 April 2026 05:49:53 +0000 (0:00:00.326) 0:07:52.071 ********
2026-04-16 05:50:06.795146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:50:06.795165 | orchestrator |
2026-04-16 05:50:06.795183 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-04-16 05:50:06.795203 | orchestrator | Thursday 16 April 2026 05:49:54 +0000 (0:00:00.768) 0:07:52.839 ********
2026-04-16 05:50:06.795224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:50:06.795243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:50:06.795262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:50:06.795278 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795291 | orchestrator |
2026-04-16 05:50:06.795304 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-04-16 05:50:06.795316 | orchestrator | Thursday 16 April 2026 05:49:54 +0000 (0:00:00.385) 0:07:53.225 ********
2026-04-16 05:50:06.795329 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795340 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.795351 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.795362 | orchestrator |
2026-04-16 05:50:06.795373 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-04-16 05:50:06.795384 | orchestrator | Thursday 16 April 2026 05:49:55 +0000 (0:00:00.292) 0:07:53.517 ********
2026-04-16 05:50:06.795394 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795405 | orchestrator |
2026-04-16 05:50:06.795416 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-04-16 05:50:06.795427 | orchestrator | Thursday 16 April 2026 05:49:55 +0000 (0:00:00.213) 0:07:53.730 ********
2026-04-16 05:50:06.795437 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795448 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.795459 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.795469 | orchestrator |
2026-04-16 05:50:06.795480 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-04-16 05:50:06.795519 | orchestrator | Thursday 16 April 2026 05:49:55 +0000 (0:00:00.531) 0:07:54.261 ********
2026-04-16 05:50:06.795531 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795542 | orchestrator |
2026-04-16 05:50:06.795553 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-04-16 05:50:06.795563 | orchestrator | Thursday 16 April 2026 05:49:56 +0000 (0:00:00.206) 0:07:54.468 ********
2026-04-16 05:50:06.795574 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795584 | orchestrator |
2026-04-16 05:50:06.795595 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-04-16 05:50:06.795606 | orchestrator | Thursday 16 April 2026 05:49:56 +0000 (0:00:00.223) 0:07:54.691 ********
2026-04-16 05:50:06.795617 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795628 | orchestrator |
2026-04-16 05:50:06.795638 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-04-16 05:50:06.795649 | orchestrator | Thursday 16 April 2026 05:49:56 +0000 (0:00:00.126) 0:07:54.817 ********
2026-04-16 05:50:06.795660 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795670 | orchestrator |
2026-04-16 05:50:06.795681 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-04-16 05:50:06.795692 | orchestrator | Thursday 16 April 2026 05:49:56 +0000 (0:00:00.227) 0:07:55.045 ********
2026-04-16 05:50:06.795703 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795714 | orchestrator |
2026-04-16 05:50:06.795725 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-04-16 05:50:06.795736 | orchestrator | Thursday 16 April 2026 05:49:56 +0000 (0:00:00.227) 0:07:55.272 ********
2026-04-16 05:50:06.795768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:50:06.795780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:50:06.795791 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:50:06.795802 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795812 | orchestrator |
2026-04-16 05:50:06.795823 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-04-16 05:50:06.795834 | orchestrator | Thursday 16 April 2026 05:49:57 +0000 (0:00:00.344) 0:07:55.662 ********
2026-04-16 05:50:06.795845 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795856 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.795866 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.795904 | orchestrator |
2026-04-16 05:50:06.795915 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-04-16 05:50:06.795926 | orchestrator | Thursday 16 April 2026 05:49:57 +0000 (0:00:00.344) 0:07:56.007 ********
2026-04-16 05:50:06.795936 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795947 | orchestrator |
2026-04-16 05:50:06.795957 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-04-16 05:50:06.795968 | orchestrator | Thursday 16 April 2026 05:49:57 +0000 (0:00:00.212) 0:07:56.220 ********
2026-04-16 05:50:06.795979 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.795989 | orchestrator |
2026-04-16 05:50:06.796000 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-04-16 05:50:06.796011 | orchestrator |
2026-04-16 05:50:06.796022 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 05:50:06.796033 | orchestrator | Thursday 16 April 2026 05:49:59 +0000 (0:00:01.165) 0:07:57.385 ********
2026-04-16 05:50:06.796044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:50:06.796057 | orchestrator |
2026-04-16 05:50:06.796068 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 05:50:06.796079 | orchestrator | Thursday 16 April 2026 05:50:00 +0000 (0:00:01.140) 0:07:58.526 ********
2026-04-16 05:50:06.796090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 05:50:06.796109 | orchestrator |
2026-04-16 05:50:06.796120 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 05:50:06.796130 | orchestrator | Thursday 16 April 2026 05:50:01 +0000 (0:00:01.187) 0:07:59.714 ********
2026-04-16 05:50:06.796141 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:50:06.796152 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:50:06.796163 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:50:06.796174 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:50:06.796185 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:50:06.796196 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:50:06.796206 | orchestrator |
2026-04-16 05:50:06.796217 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 05:50:06.796228 | orchestrator | Thursday 16 April 2026 05:50:02 +0000 (0:00:01.218) 0:08:00.932 ********
2026-04-16 05:50:06.796239 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:50:06.796249 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:50:06.796260 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:50:06.796271 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:50:06.796281 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:50:06.796292 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:50:06.796303 | orchestrator |
2026-04-16 05:50:06.796314 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 05:50:06.796324 | orchestrator | Thursday 16
April 2026 05:50:03 +0000 (0:00:00.709) 0:08:01.642 ******** 2026-04-16 05:50:06.796335 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:06.796346 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:06.796357 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:06.796367 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:06.796378 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:06.796389 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:06.796399 | orchestrator | 2026-04-16 05:50:06.796410 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 05:50:06.796421 | orchestrator | Thursday 16 April 2026 05:50:04 +0000 (0:00:00.820) 0:08:02.462 ******** 2026-04-16 05:50:06.796431 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:06.796442 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:06.796453 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:06.796468 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:06.796480 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:06.796490 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:06.796501 | orchestrator | 2026-04-16 05:50:06.796512 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 05:50:06.796523 | orchestrator | Thursday 16 April 2026 05:50:04 +0000 (0:00:00.706) 0:08:03.169 ******** 2026-04-16 05:50:06.796534 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:06.796545 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:06.796555 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:06.796566 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:06.796577 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:06.796587 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:06.796598 | orchestrator | 2026-04-16 05:50:06.796609 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-04-16 05:50:06.796620 | orchestrator | Thursday 16 April 2026 05:50:06 +0000 (0:00:01.228) 0:08:04.397 ******** 2026-04-16 05:50:06.796630 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:06.796641 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:06.796652 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:06.796662 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:06.796673 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:06.796684 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:06.796694 | orchestrator | 2026-04-16 05:50:06.796705 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 05:50:06.796722 | orchestrator | Thursday 16 April 2026 05:50:06 +0000 (0:00:00.578) 0:08:04.975 ******** 2026-04-16 05:50:06.796740 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.541711 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:36.541828 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.541851 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.541944 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.541965 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.541984 | orchestrator | 2026-04-16 05:50:36.542004 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 05:50:36.542099 | orchestrator | Thursday 16 April 2026 05:50:07 +0000 (0:00:00.752) 0:08:05.727 ******** 2026-04-16 05:50:36.542114 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.542125 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.542136 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.542147 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.542158 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.542168 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.542179 | orchestrator 
| 2026-04-16 05:50:36.542190 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 05:50:36.542216 | orchestrator | Thursday 16 April 2026 05:50:08 +0000 (0:00:00.994) 0:08:06.722 ******** 2026-04-16 05:50:36.542235 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.542251 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.542263 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.542275 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.542287 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.542300 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.542312 | orchestrator | 2026-04-16 05:50:36.542324 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 05:50:36.542337 | orchestrator | Thursday 16 April 2026 05:50:09 +0000 (0:00:01.249) 0:08:07.971 ******** 2026-04-16 05:50:36.542351 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.542365 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:36.542377 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.542390 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.542402 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.542414 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.542427 | orchestrator | 2026-04-16 05:50:36.542440 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 05:50:36.542453 | orchestrator | Thursday 16 April 2026 05:50:10 +0000 (0:00:00.601) 0:08:08.572 ******** 2026-04-16 05:50:36.542467 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.542479 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:36.542492 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.542505 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.542516 | orchestrator | ok: [testbed-node-1] 2026-04-16 
05:50:36.542527 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.542537 | orchestrator | 2026-04-16 05:50:36.542548 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 05:50:36.542559 | orchestrator | Thursday 16 April 2026 05:50:11 +0000 (0:00:00.830) 0:08:09.403 ******** 2026-04-16 05:50:36.542570 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.542581 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.542592 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.542602 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.542613 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.542624 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.542635 | orchestrator | 2026-04-16 05:50:36.542646 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 05:50:36.542657 | orchestrator | Thursday 16 April 2026 05:50:11 +0000 (0:00:00.658) 0:08:10.061 ******** 2026-04-16 05:50:36.542668 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.542678 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.542689 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.542728 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.542739 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.542750 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.542761 | orchestrator | 2026-04-16 05:50:36.542771 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 05:50:36.542782 | orchestrator | Thursday 16 April 2026 05:50:12 +0000 (0:00:00.779) 0:08:10.841 ******** 2026-04-16 05:50:36.542792 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.542803 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.542814 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.542824 | orchestrator | skipping: [testbed-node-0] 
2026-04-16 05:50:36.542835 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.542845 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.542856 | orchestrator | 2026-04-16 05:50:36.542919 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 05:50:36.542932 | orchestrator | Thursday 16 April 2026 05:50:13 +0000 (0:00:00.567) 0:08:11.408 ******** 2026-04-16 05:50:36.542942 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.542953 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:36.542964 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.542975 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.542985 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.542996 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.543007 | orchestrator | 2026-04-16 05:50:36.543018 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 05:50:36.543029 | orchestrator | Thursday 16 April 2026 05:50:13 +0000 (0:00:00.769) 0:08:12.178 ******** 2026-04-16 05:50:36.543040 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.543050 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:50:36.543061 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.543072 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:50:36.543082 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:50:36.543093 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:50:36.543104 | orchestrator | 2026-04-16 05:50:36.543115 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 05:50:36.543125 | orchestrator | Thursday 16 April 2026 05:50:14 +0000 (0:00:00.556) 0:08:12.734 ******** 2026-04-16 05:50:36.543136 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:50:36.543147 | orchestrator | skipping: [testbed-node-4] 
2026-04-16 05:50:36.543157 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:50:36.543168 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.543179 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.543189 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.543200 | orchestrator | 2026-04-16 05:50:36.543211 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 05:50:36.543244 | orchestrator | Thursday 16 April 2026 05:50:15 +0000 (0:00:00.777) 0:08:13.512 ******** 2026-04-16 05:50:36.543255 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.543266 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.543277 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.543287 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.543298 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.543308 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.543319 | orchestrator | 2026-04-16 05:50:36.543330 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 05:50:36.543341 | orchestrator | Thursday 16 April 2026 05:50:15 +0000 (0:00:00.584) 0:08:14.096 ******** 2026-04-16 05:50:36.543390 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.543402 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.543413 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.543424 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.543434 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.543445 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.543456 | orchestrator | 2026-04-16 05:50:36.543467 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-04-16 05:50:36.543487 | orchestrator | Thursday 16 April 2026 05:50:16 +0000 (0:00:01.229) 0:08:15.326 ******** 2026-04-16 05:50:36.543498 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-04-16 05:50:36.543509 | orchestrator | 2026-04-16 05:50:36.543520 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-04-16 05:50:36.543531 | orchestrator | Thursday 16 April 2026 05:50:21 +0000 (0:00:04.064) 0:08:19.390 ******** 2026-04-16 05:50:36.543541 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:50:36.543552 | orchestrator | 2026-04-16 05:50:36.543563 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-04-16 05:50:36.543574 | orchestrator | Thursday 16 April 2026 05:50:23 +0000 (0:00:02.312) 0:08:21.703 ******** 2026-04-16 05:50:36.543584 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:50:36.543595 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:50:36.543606 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:50:36.543617 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.543627 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:50:36.543638 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:50:36.543649 | orchestrator | 2026-04-16 05:50:36.543659 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-04-16 05:50:36.543670 | orchestrator | Thursday 16 April 2026 05:50:25 +0000 (0:00:01.727) 0:08:23.431 ******** 2026-04-16 05:50:36.543681 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:50:36.543692 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:50:36.543702 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:50:36.543713 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:50:36.543723 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:50:36.543734 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:50:36.543744 | orchestrator | 2026-04-16 05:50:36.543755 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-04-16 05:50:36.543766 | orchestrator | Thursday 16 April 2026 05:50:26 +0000 (0:00:01.125) 0:08:24.556 ******** 2026-04-16 05:50:36.543778 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:50:36.543790 | orchestrator | 2026-04-16 05:50:36.543801 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-16 05:50:36.543812 | orchestrator | Thursday 16 April 2026 05:50:27 +0000 (0:00:01.166) 0:08:25.723 ******** 2026-04-16 05:50:36.543822 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:50:36.543833 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:50:36.543844 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:50:36.543854 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:50:36.543892 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:50:36.543912 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:50:36.543930 | orchestrator | 2026-04-16 05:50:36.543953 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-16 05:50:36.543979 | orchestrator | Thursday 16 April 2026 05:50:28 +0000 (0:00:01.531) 0:08:27.254 ******** 2026-04-16 05:50:36.543997 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:50:36.544014 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:50:36.544031 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:50:36.544049 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:50:36.544066 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:50:36.544084 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:50:36.544104 | orchestrator | 2026-04-16 05:50:36.544123 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-04-16 05:50:36.544141 | orchestrator | Thursday 16 April 2026 05:50:32 +0000 (0:00:03.412) 
0:08:30.666 ******** 2026-04-16 05:50:36.544164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:50:36.544176 | orchestrator | 2026-04-16 05:50:36.544197 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-04-16 05:50:36.544208 | orchestrator | Thursday 16 April 2026 05:50:33 +0000 (0:00:01.246) 0:08:31.912 ******** 2026-04-16 05:50:36.544219 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:50:36.544229 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:50:36.544240 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:50:36.544251 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:50:36.544261 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:50:36.544272 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:50:36.544283 | orchestrator | 2026-04-16 05:50:36.544294 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-04-16 05:50:36.544305 | orchestrator | Thursday 16 April 2026 05:50:34 +0000 (0:00:00.596) 0:08:32.509 ******** 2026-04-16 05:50:36.544315 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:50:36.544326 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:50:36.544337 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:50:36.544348 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:50:36.544358 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:50:36.544369 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:50:36.544379 | orchestrator | 2026-04-16 05:50:36.544390 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-04-16 05:50:36.544412 | orchestrator | Thursday 16 April 2026 05:50:36 +0000 (0:00:02.375) 0:08:34.885 ******** 2026-04-16 05:51:03.127118 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.127239 | 
orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.127254 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.127265 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:51:03.127277 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:51:03.127287 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:51:03.127299 | orchestrator | 2026-04-16 05:51:03.127312 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-04-16 05:51:03.127325 | orchestrator | 2026-04-16 05:51:03.127337 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 05:51:03.127348 | orchestrator | Thursday 16 April 2026 05:50:37 +0000 (0:00:00.809) 0:08:35.694 ******** 2026-04-16 05:51:03.127360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:03.127373 | orchestrator | 2026-04-16 05:51:03.127384 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 05:51:03.127395 | orchestrator | Thursday 16 April 2026 05:50:38 +0000 (0:00:00.707) 0:08:36.402 ******** 2026-04-16 05:51:03.127406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:03.127417 | orchestrator | 2026-04-16 05:51:03.127428 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 05:51:03.127439 | orchestrator | Thursday 16 April 2026 05:50:38 +0000 (0:00:00.488) 0:08:36.891 ******** 2026-04-16 05:51:03.127450 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.127461 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.127472 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.127483 | orchestrator | 2026-04-16 05:51:03.127494 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-04-16 05:51:03.127505 | orchestrator | Thursday 16 April 2026 05:50:39 +0000 (0:00:00.521) 0:08:37.412 ******** 2026-04-16 05:51:03.127516 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.127527 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.127538 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.127549 | orchestrator | 2026-04-16 05:51:03.127560 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 05:51:03.127571 | orchestrator | Thursday 16 April 2026 05:50:39 +0000 (0:00:00.686) 0:08:38.099 ******** 2026-04-16 05:51:03.127582 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.127593 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.127604 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.127639 | orchestrator | 2026-04-16 05:51:03.127651 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 05:51:03.127664 | orchestrator | Thursday 16 April 2026 05:50:40 +0000 (0:00:00.693) 0:08:38.792 ******** 2026-04-16 05:51:03.127677 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.127689 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.127702 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.127715 | orchestrator | 2026-04-16 05:51:03.127728 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 05:51:03.127741 | orchestrator | Thursday 16 April 2026 05:50:41 +0000 (0:00:00.677) 0:08:39.470 ******** 2026-04-16 05:51:03.127753 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.127766 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.127779 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.127792 | orchestrator | 2026-04-16 05:51:03.127804 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 
05:51:03.127818 | orchestrator | Thursday 16 April 2026 05:50:41 +0000 (0:00:00.603) 0:08:40.073 ******** 2026-04-16 05:51:03.127831 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.127843 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.127856 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.127907 | orchestrator | 2026-04-16 05:51:03.127920 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 05:51:03.127933 | orchestrator | Thursday 16 April 2026 05:50:42 +0000 (0:00:00.303) 0:08:40.376 ******** 2026-04-16 05:51:03.127945 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.127958 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.127970 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.127982 | orchestrator | 2026-04-16 05:51:03.127995 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 05:51:03.128008 | orchestrator | Thursday 16 April 2026 05:50:42 +0000 (0:00:00.298) 0:08:40.675 ******** 2026-04-16 05:51:03.128021 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128034 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128047 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128059 | orchestrator | 2026-04-16 05:51:03.128085 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 05:51:03.128096 | orchestrator | Thursday 16 April 2026 05:50:43 +0000 (0:00:00.932) 0:08:41.608 ******** 2026-04-16 05:51:03.128107 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128118 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128129 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128139 | orchestrator | 2026-04-16 05:51:03.128150 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 05:51:03.128161 | orchestrator | 
Thursday 16 April 2026 05:50:43 +0000 (0:00:00.696) 0:08:42.304 ******** 2026-04-16 05:51:03.128172 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.128183 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128194 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128205 | orchestrator | 2026-04-16 05:51:03.128216 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 05:51:03.128226 | orchestrator | Thursday 16 April 2026 05:50:44 +0000 (0:00:00.310) 0:08:42.614 ******** 2026-04-16 05:51:03.128237 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.128248 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128259 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128269 | orchestrator | 2026-04-16 05:51:03.128280 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 05:51:03.128291 | orchestrator | Thursday 16 April 2026 05:50:44 +0000 (0:00:00.306) 0:08:42.921 ******** 2026-04-16 05:51:03.128302 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128313 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128323 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128334 | orchestrator | 2026-04-16 05:51:03.128363 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 05:51:03.128383 | orchestrator | Thursday 16 April 2026 05:50:45 +0000 (0:00:00.545) 0:08:43.466 ******** 2026-04-16 05:51:03.128394 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128404 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128415 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128426 | orchestrator | 2026-04-16 05:51:03.128437 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 05:51:03.128448 | orchestrator | Thursday 16 April 2026 05:50:45 +0000 
(0:00:00.331) 0:08:43.797 ******** 2026-04-16 05:51:03.128458 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128469 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128480 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128490 | orchestrator | 2026-04-16 05:51:03.128501 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 05:51:03.128512 | orchestrator | Thursday 16 April 2026 05:50:45 +0000 (0:00:00.335) 0:08:44.133 ******** 2026-04-16 05:51:03.128523 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.128534 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128544 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128555 | orchestrator | 2026-04-16 05:51:03.128566 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 05:51:03.128577 | orchestrator | Thursday 16 April 2026 05:50:46 +0000 (0:00:00.284) 0:08:44.418 ******** 2026-04-16 05:51:03.128588 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.128599 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128610 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128620 | orchestrator | 2026-04-16 05:51:03.128631 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 05:51:03.128642 | orchestrator | Thursday 16 April 2026 05:50:46 +0000 (0:00:00.499) 0:08:44.917 ******** 2026-04-16 05:51:03.128653 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.128663 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128674 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128685 | orchestrator | 2026-04-16 05:51:03.128696 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 05:51:03.128706 | orchestrator | Thursday 16 April 2026 05:50:46 +0000 (0:00:00.296) 
0:08:45.214 ******** 2026-04-16 05:51:03.128717 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128728 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128739 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128749 | orchestrator | 2026-04-16 05:51:03.128760 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 05:51:03.128771 | orchestrator | Thursday 16 April 2026 05:50:47 +0000 (0:00:00.325) 0:08:45.539 ******** 2026-04-16 05:51:03.128782 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:03.128792 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:03.128803 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:03.128814 | orchestrator | 2026-04-16 05:51:03.128824 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-16 05:51:03.128835 | orchestrator | Thursday 16 April 2026 05:50:47 +0000 (0:00:00.727) 0:08:46.267 ******** 2026-04-16 05:51:03.128846 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:03.128879 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:03.128898 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-04-16 05:51:03.128910 | orchestrator | 2026-04-16 05:51:03.128921 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-04-16 05:51:03.128932 | orchestrator | Thursday 16 April 2026 05:50:48 +0000 (0:00:00.405) 0:08:46.672 ******** 2026-04-16 05:51:03.128943 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:51:03.128954 | orchestrator | 2026-04-16 05:51:03.128964 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-04-16 05:51:03.128975 | orchestrator | Thursday 16 April 2026 05:50:50 +0000 (0:00:02.239) 0:08:48.912 ******** 2026-04-16 05:51:03.128995 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-04-16 05:51:03.129008 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:03.129019 | orchestrator | 2026-04-16 05:51:03.129030 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-04-16 05:51:03.129041 | orchestrator | Thursday 16 April 2026 05:50:50 +0000 (0:00:00.214) 0:08:49.126 ******** 2026-04-16 05:51:03.129061 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:51:03.129081 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:51:03.129092 | orchestrator | 2026-04-16 05:51:03.129103 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-04-16 05:51:03.129114 | orchestrator | Thursday 16 April 2026 05:50:58 +0000 (0:00:07.990) 0:08:57.117 ******** 2026-04-16 05:51:03.129125 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 05:51:03.129136 | orchestrator | 2026-04-16 05:51:03.129147 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-16 05:51:03.129158 | orchestrator | Thursday 16 April 2026 05:51:02 +0000 (0:00:03.591) 0:09:00.708 ******** 2026-04-16 05:51:03.129168 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-16 05:51:03.129179 | orchestrator | 2026-04-16 05:51:03.129198 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-16 05:51:29.227798 | orchestrator | Thursday 16 April 2026 05:51:03 +0000 (0:00:00.765) 0:09:01.474 ******** 2026-04-16 05:51:29.227966 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 05:51:29.227981 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 05:51:29.227991 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 05:51:29.228000 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-16 05:51:29.228010 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-16 05:51:29.228019 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-04-16 05:51:29.228028 | orchestrator | 2026-04-16 05:51:29.228038 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-16 05:51:29.228046 | orchestrator | Thursday 16 April 2026 05:51:04 +0000 (0:00:01.063) 0:09:02.538 ******** 2026-04-16 05:51:29.228055 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:29.228065 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 05:51:29.228074 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:51:29.228083 | orchestrator | 2026-04-16 05:51:29.228091 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-16 05:51:29.228100 | orchestrator | Thursday 16 April 2026 05:51:06 +0000 (0:00:02.141) 0:09:04.679 ******** 2026-04-16 05:51:29.228110 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-16 05:51:29.228119 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-04-16 05:51:29.228128 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228138 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-16 05:51:29.228147 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 05:51:29.228155 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228186 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-16 05:51:29.228195 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-16 05:51:29.228204 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228212 | orchestrator | 2026-04-16 05:51:29.228221 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-16 05:51:29.228229 | orchestrator | Thursday 16 April 2026 05:51:07 +0000 (0:00:01.193) 0:09:05.872 ******** 2026-04-16 05:51:29.228238 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228265 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228274 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228283 | orchestrator | 2026-04-16 05:51:29.228291 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-16 05:51:29.228299 | orchestrator | Thursday 16 April 2026 05:51:10 +0000 (0:00:02.898) 0:09:08.771 ******** 2026-04-16 05:51:29.228308 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:29.228316 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:29.228325 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:29.228334 | orchestrator | 2026-04-16 05:51:29.228344 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-16 05:51:29.228353 | orchestrator | Thursday 16 April 2026 05:51:10 +0000 (0:00:00.308) 0:09:09.079 ******** 2026-04-16 05:51:29.228363 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-04-16 05:51:29.228373 | orchestrator | 2026-04-16 05:51:29.228382 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-16 05:51:29.228391 | orchestrator | Thursday 16 April 2026 05:51:11 +0000 (0:00:00.524) 0:09:09.604 ******** 2026-04-16 05:51:29.228401 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:29.228411 | orchestrator | 2026-04-16 05:51:29.228420 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-16 05:51:29.228430 | orchestrator | Thursday 16 April 2026 05:51:12 +0000 (0:00:00.807) 0:09:10.411 ******** 2026-04-16 05:51:29.228439 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228448 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228458 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228468 | orchestrator | 2026-04-16 05:51:29.228491 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-16 05:51:29.228501 | orchestrator | Thursday 16 April 2026 05:51:13 +0000 (0:00:01.215) 0:09:11.626 ******** 2026-04-16 05:51:29.228510 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228519 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228529 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228538 | orchestrator | 2026-04-16 05:51:29.228547 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-16 05:51:29.228556 | orchestrator | Thursday 16 April 2026 05:51:14 +0000 (0:00:01.321) 0:09:12.947 ******** 2026-04-16 05:51:29.228566 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228576 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228585 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228594 | orchestrator | 2026-04-16 
05:51:29.228604 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-16 05:51:29.228613 | orchestrator | Thursday 16 April 2026 05:51:16 +0000 (0:00:01.738) 0:09:14.685 ******** 2026-04-16 05:51:29.228623 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228632 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228642 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228652 | orchestrator | 2026-04-16 05:51:29.228662 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-16 05:51:29.228672 | orchestrator | Thursday 16 April 2026 05:51:18 +0000 (0:00:01.903) 0:09:16.589 ******** 2026-04-16 05:51:29.228681 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.228690 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:29.228707 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:29.228716 | orchestrator | 2026-04-16 05:51:29.228725 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-04-16 05:51:29.228752 | orchestrator | Thursday 16 April 2026 05:51:19 +0000 (0:00:01.459) 0:09:18.049 ******** 2026-04-16 05:51:29.228761 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228770 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228779 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228788 | orchestrator | 2026-04-16 05:51:29.228796 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-04-16 05:51:29.228805 | orchestrator | Thursday 16 April 2026 05:51:20 +0000 (0:00:00.668) 0:09:18.718 ******** 2026-04-16 05:51:29.228814 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:29.228823 | orchestrator | 2026-04-16 05:51:29.228832 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-04-16 05:51:29.228841 | orchestrator | Thursday 16 April 2026 05:51:21 +0000 (0:00:00.748) 0:09:19.466 ******** 2026-04-16 05:51:29.228850 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.228876 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:29.228885 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:29.228893 | orchestrator | 2026-04-16 05:51:29.228902 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-04-16 05:51:29.228911 | orchestrator | Thursday 16 April 2026 05:51:21 +0000 (0:00:00.330) 0:09:19.797 ******** 2026-04-16 05:51:29.228920 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:29.228928 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:29.228937 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:29.228945 | orchestrator | 2026-04-16 05:51:29.228955 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-04-16 05:51:29.228963 | orchestrator | Thursday 16 April 2026 05:51:22 +0000 (0:00:01.228) 0:09:21.025 ******** 2026-04-16 05:51:29.228972 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:51:29.228981 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:51:29.228990 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:51:29.228999 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:29.229008 | orchestrator | 2026-04-16 05:51:29.229017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-04-16 05:51:29.229026 | orchestrator | Thursday 16 April 2026 05:51:23 +0000 (0:00:00.835) 0:09:21.861 ******** 2026-04-16 05:51:29.229035 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.229043 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:29.229052 | orchestrator | ok: [testbed-node-5] 2026-04-16 
05:51:29.229061 | orchestrator | 2026-04-16 05:51:29.229070 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-04-16 05:51:29.229079 | orchestrator | 2026-04-16 05:51:29.229087 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 05:51:29.229096 | orchestrator | Thursday 16 April 2026 05:51:24 +0000 (0:00:00.751) 0:09:22.612 ******** 2026-04-16 05:51:29.229104 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:29.229113 | orchestrator | 2026-04-16 05:51:29.229120 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 05:51:29.229128 | orchestrator | Thursday 16 April 2026 05:51:24 +0000 (0:00:00.479) 0:09:23.092 ******** 2026-04-16 05:51:29.229136 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:29.229144 | orchestrator | 2026-04-16 05:51:29.229152 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 05:51:29.229160 | orchestrator | Thursday 16 April 2026 05:51:25 +0000 (0:00:00.784) 0:09:23.876 ******** 2026-04-16 05:51:29.229167 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:29.229181 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:29.229188 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:29.229195 | orchestrator | 2026-04-16 05:51:29.229203 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 05:51:29.229211 | orchestrator | Thursday 16 April 2026 05:51:25 +0000 (0:00:00.360) 0:09:24.236 ******** 2026-04-16 05:51:29.229218 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.229226 | orchestrator | ok: [testbed-node-4] 2026-04-16 
05:51:29.229233 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:29.229241 | orchestrator | 2026-04-16 05:51:29.229248 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 05:51:29.229261 | orchestrator | Thursday 16 April 2026 05:51:26 +0000 (0:00:00.731) 0:09:24.967 ******** 2026-04-16 05:51:29.229268 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.229275 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:29.229282 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:29.229289 | orchestrator | 2026-04-16 05:51:29.229296 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 05:51:29.229303 | orchestrator | Thursday 16 April 2026 05:51:27 +0000 (0:00:00.700) 0:09:25.668 ******** 2026-04-16 05:51:29.229310 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:29.229317 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:29.229324 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:29.229332 | orchestrator | 2026-04-16 05:51:29.229340 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 05:51:29.229348 | orchestrator | Thursday 16 April 2026 05:51:28 +0000 (0:00:01.115) 0:09:26.783 ******** 2026-04-16 05:51:29.229355 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:29.229363 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:29.229371 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:29.229379 | orchestrator | 2026-04-16 05:51:29.229386 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 05:51:29.229393 | orchestrator | Thursday 16 April 2026 05:51:28 +0000 (0:00:00.319) 0:09:27.103 ******** 2026-04-16 05:51:29.229401 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:29.229408 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:29.229415 | orchestrator | skipping: 
[testbed-node-5] 2026-04-16 05:51:29.229423 | orchestrator | 2026-04-16 05:51:29.229430 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 05:51:29.229437 | orchestrator | Thursday 16 April 2026 05:51:29 +0000 (0:00:00.292) 0:09:27.396 ******** 2026-04-16 05:51:29.229452 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047069 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047187 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.047203 | orchestrator | 2026-04-16 05:51:51.047218 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 05:51:51.047231 | orchestrator | Thursday 16 April 2026 05:51:29 +0000 (0:00:00.657) 0:09:28.054 ******** 2026-04-16 05:51:51.047242 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.047253 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.047264 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.047275 | orchestrator | 2026-04-16 05:51:51.047286 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 05:51:51.047297 | orchestrator | Thursday 16 April 2026 05:51:30 +0000 (0:00:00.717) 0:09:28.771 ******** 2026-04-16 05:51:51.047308 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.047318 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.047329 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.047340 | orchestrator | 2026-04-16 05:51:51.047350 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 05:51:51.047362 | orchestrator | Thursday 16 April 2026 05:51:31 +0000 (0:00:00.691) 0:09:29.463 ******** 2026-04-16 05:51:51.047372 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047383 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047394 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
05:51:51.047428 | orchestrator | 2026-04-16 05:51:51.047440 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 05:51:51.047450 | orchestrator | Thursday 16 April 2026 05:51:31 +0000 (0:00:00.323) 0:09:29.787 ******** 2026-04-16 05:51:51.047461 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047473 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047483 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.047494 | orchestrator | 2026-04-16 05:51:51.047505 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 05:51:51.047515 | orchestrator | Thursday 16 April 2026 05:51:32 +0000 (0:00:00.583) 0:09:30.371 ******** 2026-04-16 05:51:51.047526 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.047537 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.047547 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.047558 | orchestrator | 2026-04-16 05:51:51.047569 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 05:51:51.047580 | orchestrator | Thursday 16 April 2026 05:51:32 +0000 (0:00:00.346) 0:09:30.717 ******** 2026-04-16 05:51:51.047591 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.047601 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.047612 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.047623 | orchestrator | 2026-04-16 05:51:51.047633 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 05:51:51.047644 | orchestrator | Thursday 16 April 2026 05:51:32 +0000 (0:00:00.338) 0:09:31.056 ******** 2026-04-16 05:51:51.047655 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.047665 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.047676 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.047686 | orchestrator | 2026-04-16 
05:51:51.047697 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 05:51:51.047708 | orchestrator | Thursday 16 April 2026 05:51:33 +0000 (0:00:00.363) 0:09:31.419 ******** 2026-04-16 05:51:51.047719 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047729 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047740 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.047751 | orchestrator | 2026-04-16 05:51:51.047762 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 05:51:51.047773 | orchestrator | Thursday 16 April 2026 05:51:33 +0000 (0:00:00.568) 0:09:31.988 ******** 2026-04-16 05:51:51.047783 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047794 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047805 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.047815 | orchestrator | 2026-04-16 05:51:51.047826 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 05:51:51.047837 | orchestrator | Thursday 16 April 2026 05:51:33 +0000 (0:00:00.306) 0:09:32.294 ******** 2026-04-16 05:51:51.047907 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.047920 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.047931 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.047942 | orchestrator | 2026-04-16 05:51:51.047953 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 05:51:51.047965 | orchestrator | Thursday 16 April 2026 05:51:34 +0000 (0:00:00.285) 0:09:32.580 ******** 2026-04-16 05:51:51.047991 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.048002 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.048012 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.048023 | orchestrator | 2026-04-16 05:51:51.048034 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 05:51:51.048044 | orchestrator | Thursday 16 April 2026 05:51:34 +0000 (0:00:00.326) 0:09:32.907 ******** 2026-04-16 05:51:51.048055 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:51:51.048066 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:51:51.048076 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:51:51.048087 | orchestrator | 2026-04-16 05:51:51.048097 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-16 05:51:51.048117 | orchestrator | Thursday 16 April 2026 05:51:35 +0000 (0:00:00.872) 0:09:33.780 ******** 2026-04-16 05:51:51.048128 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:51.048140 | orchestrator | 2026-04-16 05:51:51.048151 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 05:51:51.048161 | orchestrator | Thursday 16 April 2026 05:51:35 +0000 (0:00:00.555) 0:09:34.335 ******** 2026-04-16 05:51:51.048172 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048183 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 05:51:51.048194 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:51:51.048204 | orchestrator | 2026-04-16 05:51:51.048215 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 05:51:51.048226 | orchestrator | Thursday 16 April 2026 05:51:38 +0000 (0:00:02.485) 0:09:36.821 ******** 2026-04-16 05:51:51.048255 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-16 05:51:51.048267 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 05:51:51.048277 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:51.048288 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-04-16 05:51:51.048298 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 05:51:51.048309 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:51.048319 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-16 05:51:51.048330 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-04-16 05:51:51.048340 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:51.048351 | orchestrator | 2026-04-16 05:51:51.048361 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-04-16 05:51:51.048372 | orchestrator | Thursday 16 April 2026 05:51:39 +0000 (0:00:01.452) 0:09:38.273 ******** 2026-04-16 05:51:51.048383 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:51:51.048393 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:51:51.048404 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:51:51.048414 | orchestrator | 2026-04-16 05:51:51.048425 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-16 05:51:51.048436 | orchestrator | Thursday 16 April 2026 05:51:40 +0000 (0:00:00.340) 0:09:38.613 ******** 2026-04-16 05:51:51.048446 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:51:51.048457 | orchestrator | 2026-04-16 05:51:51.048468 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-16 05:51:51.048479 | orchestrator | Thursday 16 April 2026 05:51:40 +0000 (0:00:00.545) 0:09:39.159 ******** 2026-04-16 05:51:51.048491 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 05:51:51.048504 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 05:51:51.048515 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 05:51:51.048526 | orchestrator | 2026-04-16 05:51:51.048536 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-16 05:51:51.048547 | orchestrator | Thursday 16 April 2026 05:51:42 +0000 (0:00:01.275) 0:09:40.434 ******** 2026-04-16 05:51:51.048558 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048569 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-16 05:51:51.048579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048598 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-16 05:51:51.048609 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048620 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-16 05:51:51.048630 | orchestrator | 2026-04-16 05:51:51.048641 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 05:51:51.048652 | orchestrator | Thursday 16 April 2026 05:51:46 +0000 (0:00:04.375) 0:09:44.810 ******** 2026-04-16 05:51:51.048662 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048673 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:51:51.048684 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048699 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:51:51.048710 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:51:51.048720 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:51:51.048731 | orchestrator | 2026-04-16 05:51:51.048741 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 05:51:51.048752 | orchestrator | Thursday 16 April 2026 05:51:48 +0000 (0:00:02.271) 0:09:47.081 ******** 2026-04-16 05:51:51.048763 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-16 05:51:51.048773 | orchestrator | changed: [testbed-node-3] 2026-04-16 05:51:51.048784 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-16 05:51:51.048794 | orchestrator | changed: [testbed-node-4] 2026-04-16 05:51:51.048805 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-16 05:51:51.048815 | orchestrator | changed: [testbed-node-5] 2026-04-16 05:51:51.048826 | orchestrator | 2026-04-16 05:51:51.048837 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-16 05:51:51.048867 | orchestrator | Thursday 16 April 2026 05:51:50 +0000 (0:00:01.456) 0:09:48.538 ******** 2026-04-16 05:51:51.048878 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-16 05:51:51.048889 | orchestrator | 2026-04-16 05:51:51.048900 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-16 05:51:51.048911 | orchestrator | Thursday 16 April 2026 05:51:50 +0000 (0:00:00.240) 0:09:48.778 ******** 2026-04-16 05:51:51.048921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-04-16 05:51:51.048939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281601 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:52:35.281613 | orchestrator | 2026-04-16 05:52:35.281626 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-16 05:52:35.281639 | orchestrator | Thursday 16 April 2026 05:51:51 +0000 (0:00:00.617) 0:09:49.395 ******** 2026-04-16 05:52:35.281651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281715 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 05:52:35.281738 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
05:52:35.281749 | orchestrator | 2026-04-16 05:52:35.281760 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-16 05:52:35.281771 | orchestrator | Thursday 16 April 2026 05:51:51 +0000 (0:00:00.583) 0:09:49.979 ******** 2026-04-16 05:52:35.281782 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 05:52:35.281795 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 05:52:35.281806 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 05:52:35.281817 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 05:52:35.281828 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 05:52:35.281839 | orchestrator | 2026-04-16 05:52:35.281875 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-16 05:52:35.281886 | orchestrator | Thursday 16 April 2026 05:52:22 +0000 (0:00:31.319) 0:10:21.298 ******** 2026-04-16 05:52:35.281897 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:52:35.281908 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:52:35.281919 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:52:35.281932 | orchestrator | 2026-04-16 05:52:35.281945 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-16 05:52:35.281957 | orchestrator | 
Thursday 16 April 2026 05:52:23 +0000 (0:00:00.342) 0:10:21.641 ********
2026-04-16 05:52:35.281987 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:35.281998 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:35.282009 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:35.282098 | orchestrator |
2026-04-16 05:52:35.282111 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-16 05:52:35.282122 | orchestrator | Thursday 16 April 2026 05:52:23 +0000 (0:00:00.302) 0:10:21.944 ********
2026-04-16 05:52:35.282133 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:52:35.282144 | orchestrator |
2026-04-16 05:52:35.282155 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-16 05:52:35.282166 | orchestrator | Thursday 16 April 2026 05:52:24 +0000 (0:00:00.756) 0:10:22.701 ********
2026-04-16 05:52:35.282177 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:52:35.282188 | orchestrator |
2026-04-16 05:52:35.282199 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-16 05:52:35.282209 | orchestrator | Thursday 16 April 2026 05:52:24 +0000 (0:00:00.507) 0:10:23.208 ********
2026-04-16 05:52:35.282221 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:52:35.282232 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:52:35.282242 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:52:35.282253 | orchestrator |
2026-04-16 05:52:35.282264 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-16 05:52:35.282326 | orchestrator | Thursday 16 April 2026 05:52:26 +0000 (0:00:01.559) 0:10:24.768 ********
2026-04-16 05:52:35.282338 | orchestrator | changed:
[testbed-node-4]
2026-04-16 05:52:35.282349 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:52:35.282359 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:52:35.282370 | orchestrator |
2026-04-16 05:52:35.282381 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-16 05:52:35.282411 | orchestrator | Thursday 16 April 2026 05:52:27 +0000 (0:00:01.148) 0:10:25.916 ********
2026-04-16 05:52:35.282423 | orchestrator | changed: [testbed-node-3]
2026-04-16 05:52:35.282434 | orchestrator | changed: [testbed-node-4]
2026-04-16 05:52:35.282445 | orchestrator | changed: [testbed-node-5]
2026-04-16 05:52:35.282455 | orchestrator |
2026-04-16 05:52:35.282466 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-16 05:52:35.282478 | orchestrator | Thursday 16 April 2026 05:52:29 +0000 (0:00:02.579) 0:10:27.675 ********
2026-04-16 05:52:35.282489 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-16 05:52:35.282500 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 05:52:35.282511 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-04-16 05:52:35.282522 | orchestrator |
2026-04-16 05:52:35.282532 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-04-16 05:52:35.282543 | orchestrator | Thursday 16 April 2026 05:52:31 +0000 (0:00:02.579) 0:10:30.255 ********
2026-04-16 05:52:35.282554 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:35.282565 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:35.282575 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:35.282586 | orchestrator
|
2026-04-16 05:52:35.282597 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-04-16 05:52:35.282608 | orchestrator | Thursday 16 April 2026 05:52:32 +0000 (0:00:00.352) 0:10:30.607 ********
2026-04-16 05:52:35.282618 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:52:35.282629 | orchestrator |
2026-04-16 05:52:35.282640 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-04-16 05:52:35.282651 | orchestrator | Thursday 16 April 2026 05:52:33 +0000 (0:00:00.845) 0:10:31.453 ********
2026-04-16 05:52:35.282662 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:35.282674 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:35.282685 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:35.282696 | orchestrator |
2026-04-16 05:52:35.282706 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-04-16 05:52:35.282717 | orchestrator | Thursday 16 April 2026 05:52:33 +0000 (0:00:00.323) 0:10:31.776 ********
2026-04-16 05:52:35.282728 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:35.282738 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:35.282749 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:35.282760 | orchestrator |
2026-04-16 05:52:35.282770 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-04-16 05:52:35.282781 | orchestrator | Thursday 16 April 2026 05:52:33 +0000 (0:00:00.370) 0:10:32.147 ********
2026-04-16 05:52:35.282792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 05:52:35.282804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 05:52:35.282814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 05:52:35.282825 | orchestrator
| skipping: [testbed-node-3]
2026-04-16 05:52:35.282836 | orchestrator |
2026-04-16 05:52:35.282864 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-04-16 05:52:35.282875 | orchestrator | Thursday 16 April 2026 05:52:34 +0000 (0:00:00.923) 0:10:33.071 ********
2026-04-16 05:52:35.282895 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:35.282906 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:35.282917 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:35.282927 | orchestrator |
2026-04-16 05:52:35.282938 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:52:35.282949 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-04-16 05:52:35.282967 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-04-16 05:52:35.282978 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-04-16 05:52:35.282989 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-04-16 05:52:35.283000 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-04-16 05:52:35.283010 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-04-16 05:52:35.283021 | orchestrator |
2026-04-16 05:52:35.283032 | orchestrator |
2026-04-16 05:52:35.283042 | orchestrator |
2026-04-16 05:52:35.283053 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:52:35.283064 | orchestrator | Thursday 16 April 2026 05:52:35 +0000 (0:00:00.544) 0:10:33.615 ********
2026-04-16 05:52:35.283074 | orchestrator | ===============================================================================
2026-04-16 05:52:35.283085 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 65.75s
2026-04-16 05:52:35.283096 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.01s
2026-04-16 05:52:35.283107 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.32s
2026-04-16 05:52:35.283124 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.34s
2026-04-16 05:52:35.799985 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.89s
2026-04-16 05:52:35.800148 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.99s
2026-04-16 05:52:35.800166 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.55s
2026-04-16 05:52:35.800177 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.89s
2026-04-16 05:52:35.800189 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.12s
2026-04-16 05:52:35.800200 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.99s
2026-04-16 05:52:35.800214 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.53s
2026-04-16 05:52:35.801192 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.43s
2026-04-16 05:52:35.801289 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.06s
2026-04-16 05:52:35.801307 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.38s
2026-04-16 05:52:35.801321 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.06s
2026-04-16 05:52:35.801334 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.59s
2026-04-16
05:52:35.801346 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s
2026-04-16 05:52:35.801359 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.41s
2026-04-16 05:52:35.801371 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.13s
2026-04-16 05:52:35.801384 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.13s
2026-04-16 05:52:38.208256 | orchestrator | 2026-04-16 05:52:38 | INFO  | Task 7b0ad0e6-9d6e-4f79-be78-3ce32b93fe9a (ceph-pools) was prepared for execution.
2026-04-16 05:52:38.208390 | orchestrator | 2026-04-16 05:52:38 | INFO  | It takes a moment until task 7b0ad0e6-9d6e-4f79-be78-3ce32b93fe9a (ceph-pools) has been started and output is visible here.
2026-04-16 05:52:51.256513 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-04-16 05:52:51.256574 | orchestrator | 2.16.14
2026-04-16 05:52:51.256581 | orchestrator |
2026-04-16 05:52:51.256586 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-04-16 05:52:51.256592 | orchestrator |
2026-04-16 05:52:51.256596 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 05:52:51.256601 | orchestrator | Thursday 16 April 2026 05:52:42 +0000 (0:00:00.553) 0:00:00.553 ********
2026-04-16 05:52:51.256605 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 05:52:51.256610 | orchestrator |
2026-04-16 05:52:51.256614 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 05:52:51.256619 | orchestrator | Thursday 16 April 2026 05:52:42 +0000 (0:00:00.548) 0:00:01.102 ********
2026-04-16 05:52:51.256623 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256627 |
orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256631 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256635 | orchestrator |
2026-04-16 05:52:51.256639 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 05:52:51.256643 | orchestrator | Thursday 16 April 2026 05:52:43 +0000 (0:00:00.582) 0:00:01.684 ********
2026-04-16 05:52:51.256647 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256651 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256655 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256659 | orchestrator |
2026-04-16 05:52:51.256663 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 05:52:51.256667 | orchestrator | Thursday 16 April 2026 05:52:43 +0000 (0:00:00.258) 0:00:01.943 ********
2026-04-16 05:52:51.256675 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256679 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256683 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256687 | orchestrator |
2026-04-16 05:52:51.256691 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 05:52:51.256695 | orchestrator | Thursday 16 April 2026 05:52:44 +0000 (0:00:00.746) 0:00:02.690 ********
2026-04-16 05:52:51.256699 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256703 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256707 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256711 | orchestrator |
2026-04-16 05:52:51.256715 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 05:52:51.256719 | orchestrator | Thursday 16 April 2026 05:52:44 +0000 (0:00:00.286) 0:00:02.976 ********
2026-04-16 05:52:51.256723 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256727 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256731 |
orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256735 | orchestrator |
2026-04-16 05:52:51.256739 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 05:52:51.256743 | orchestrator | Thursday 16 April 2026 05:52:45 +0000 (0:00:00.267) 0:00:03.244 ********
2026-04-16 05:52:51.256747 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256751 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256755 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256759 | orchestrator |
2026-04-16 05:52:51.256763 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 05:52:51.256767 | orchestrator | Thursday 16 April 2026 05:52:45 +0000 (0:00:00.292) 0:00:03.536 ********
2026-04-16 05:52:51.256771 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:51.256775 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:51.256779 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:51.256795 | orchestrator |
2026-04-16 05:52:51.256799 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 05:52:51.256804 | orchestrator | Thursday 16 April 2026 05:52:45 +0000 (0:00:00.476) 0:00:04.012 ********
2026-04-16 05:52:51.256808 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256812 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256815 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256819 | orchestrator |
2026-04-16 05:52:51.256823 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 05:52:51.256827 | orchestrator | Thursday 16 April 2026 05:52:46 +0000 (0:00:00.280) 0:00:04.292 ********
2026-04-16 05:52:51.256832 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:52:51.256836 | orchestrator | ok: [testbed-node-3 ->
testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:52:51.256854 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:52:51.256858 | orchestrator |
2026-04-16 05:52:51.256862 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 05:52:51.256865 | orchestrator | Thursday 16 April 2026 05:52:46 +0000 (0:00:00.622) 0:00:04.915 ********
2026-04-16 05:52:51.256869 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:51.256873 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:52:51.256876 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:51.256880 | orchestrator |
2026-04-16 05:52:51.256884 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 05:52:51.256888 | orchestrator | Thursday 16 April 2026 05:52:47 +0000 (0:00:00.431) 0:00:05.347 ********
2026-04-16 05:52:51.256891 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 05:52:51.256895 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 05:52:51.256899 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 05:52:51.256903 | orchestrator |
2026-04-16 05:52:51.256906 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 05:52:51.256910 | orchestrator | Thursday 16 April 2026 05:52:49 +0000 (0:00:02.116) 0:00:07.464 ********
2026-04-16 05:52:51.256914 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 05:52:51.256918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 05:52:51.256922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 05:52:51.256926 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:51.256930 |
orchestrator | 2026-04-16 05:52:51.256941 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 05:52:51.256945 | orchestrator | Thursday 16 April 2026 05:52:49 +0000 (0:00:00.586) 0:00:08.050 ******** 2026-04-16 05:52:51.256951 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.256957 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.256961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.256965 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:52:51.256969 | orchestrator | 2026-04-16 05:52:51.256973 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 05:52:51.256976 | orchestrator | Thursday 16 April 2026 05:52:50 +0000 (0:00:00.937) 0:00:08.987 ******** 2026-04-16 05:52:51.256988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.256994 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.256998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 05:52:51.257002 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:52:51.257006 | orchestrator | 2026-04-16 05:52:51.257010 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 05:52:51.257013 | orchestrator | Thursday 16 April 2026 05:52:51 +0000 (0:00:00.160) 0:00:09.148 ******** 2026-04-16 05:52:51.257019 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '7ecc09e53bd0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 05:52:48.069518', 'end': '2026-04-16 05:52:48.122197', 'delta': '0:00:00.052679', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ecc09e53bd0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 05:52:51.257026 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'deb83ba22d33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 05:52:48.601876', 'end': '2026-04-16 05:52:48.644588', 'delta': '0:00:00.042712', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['deb83ba22d33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 05:52:51.257033 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8eb997055eb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 05:52:49.146499', 'end': '2026-04-16 05:52:49.191497', 'delta': '0:00:00.044998', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8eb997055eb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 05:52:57.734942 | orchestrator | 2026-04-16 05:52:58.082252 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 05:52:58.082386 | orchestrator | Thursday 16 April 2026 05:52:51 +0000 (0:00:00.212) 0:00:09.360 ******** 2026-04-16 05:52:58.082403 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:52:58.082416 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:52:58.082427 | 
orchestrator | ok: [testbed-node-5]
2026-04-16 05:52:58.082438 | orchestrator |
2026-04-16 05:52:58.082450 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 05:52:58.082461 | orchestrator | Thursday 16 April 2026 05:52:51 +0000 (0:00:00.429) 0:00:09.789 ********
2026-04-16 05:52:58.082487 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-04-16 05:52:58.082499 | orchestrator |
2026-04-16 05:52:58.082510 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 05:52:58.082522 | orchestrator | Thursday 16 April 2026 05:52:53 +0000 (0:00:01.672) 0:00:11.462 ********
2026-04-16 05:52:58.082532 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082543 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082554 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.082565 | orchestrator |
2026-04-16 05:52:58.082576 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 05:52:58.082587 | orchestrator | Thursday 16 April 2026 05:52:53 +0000 (0:00:00.271) 0:00:11.734 ********
2026-04-16 05:52:58.082597 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082608 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082619 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.082630 | orchestrator |
2026-04-16 05:52:58.082640 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 05:52:58.082651 | orchestrator | Thursday 16 April 2026 05:52:54 +0000 (0:00:00.768) 0:00:12.502 ********
2026-04-16 05:52:58.082662 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082673 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082684 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.082695 | orchestrator |
2026-04-16 05:52:58.082706 |
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 05:52:58.082717 | orchestrator | Thursday 16 April 2026 05:52:54 +0000 (0:00:00.127) 0:00:12.779 ********
2026-04-16 05:52:58.082728 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:52:58.082738 | orchestrator |
2026-04-16 05:52:58.082749 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 05:52:58.082760 | orchestrator | Thursday 16 April 2026 05:52:54 +0000 (0:00:00.225) 0:00:12.907 ********
2026-04-16 05:52:58.082771 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082781 | orchestrator |
2026-04-16 05:52:58.082792 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 05:52:58.082803 | orchestrator | Thursday 16 April 2026 05:52:55 +0000 (0:00:00.225) 0:00:13.133 ********
2026-04-16 05:52:58.082814 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082824 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082835 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.082872 | orchestrator |
2026-04-16 05:52:58.082883 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 05:52:58.082894 | orchestrator | Thursday 16 April 2026 05:52:55 +0000 (0:00:00.282) 0:00:13.416 ********
2026-04-16 05:52:58.082905 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082916 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082926 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.082937 | orchestrator |
2026-04-16 05:52:58.082948 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 05:52:58.082959 | orchestrator | Thursday 16 April 2026 05:52:55 +0000 (0:00:00.309) 0:00:13.725 ********
2026-04-16 05:52:58.082970 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.082980 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.082991 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.083001 | orchestrator |
2026-04-16 05:52:58.083021 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 05:52:58.083032 | orchestrator | Thursday 16 April 2026 05:52:56 +0000 (0:00:00.512) 0:00:14.238 ********
2026-04-16 05:52:58.083043 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.083054 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.083065 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.083076 | orchestrator |
2026-04-16 05:52:58.083087 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 05:52:58.083098 | orchestrator | Thursday 16 April 2026 05:52:56 +0000 (0:00:00.311) 0:00:14.549 ********
2026-04-16 05:52:58.083109 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.083119 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.083130 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.083141 | orchestrator |
2026-04-16 05:52:58.083152 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 05:52:58.083163 | orchestrator | Thursday 16 April 2026 05:52:56 +0000 (0:00:00.312) 0:00:14.861 ********
2026-04-16 05:52:58.083174 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.083185 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.083195 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:52:58.083206 | orchestrator |
2026-04-16 05:52:58.083216 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-16 05:52:58.083228 | orchestrator | Thursday 16 April 2026 05:52:57 +0000 (0:00:00.478) 0:00:15.339 ********
2026-04-16 05:52:58.083239 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.083249 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:52:58.083260 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:52:58.083271 | orchestrator | 2026-04-16 05:52:58.083281 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 05:52:58.083292 | orchestrator | Thursday 16 April 2026 05:52:57 +0000 (0:00:00.313) 0:00:15.653 ******** 2026-04-16 05:52:58.083336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-04-16 05:52:58.083440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083477 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083521 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-04-16 05:52:58.083636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083694 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:52:58.083705 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.083814 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083826 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:52:58.083867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.083900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-04-16 05:52:58.183641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.183659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.183669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.183679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.183688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-04-16 05:52:58.183698 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:52:58.183707 | orchestrator | 2026-04-16 05:52:58.183716 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-04-16 05:52:58.183725 | orchestrator | Thursday 16 April 2026 05:52:58 +0000 (0:00:00.537) 0:00:16.191 ******** 2026-04-16 05:52:58.183744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:52:58.285185 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-04-16 05:52:58.285286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285318 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285330 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285343 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285487 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.285521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380198 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380417 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:52:58.380427 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380436 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380453 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.380469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476703 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476928 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476975 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.476985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.477015 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.477026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.477044 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686147 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:52:58.686275 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686303 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686323 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686376 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686416 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:52:58.686572 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:53:08.435776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:53:08.435899 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-04-16-04-32-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-04-16 05:53:08.435922 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:53:08.435928 | orchestrator |
2026-04-16 05:53:08.435932 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 05:53:08.435938 | orchestrator | Thursday 16 April 2026 05:52:58 +0000 (0:00:00.599) 0:00:16.791 ********
2026-04-16 05:53:08.435942 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:53:08.435946 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:53:08.436016 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:53:08.436023 | orchestrator |
2026-04-16 05:53:08.436028 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 05:53:08.436032 | orchestrator | Thursday 16 April 2026 05:52:59 +0000 (0:00:00.840) 0:00:17.631 ********
2026-04-16 05:53:08.436036 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:53:08.436040 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:53:08.436044 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:53:08.436048 | orchestrator |
2026-04-16 05:53:08.436052 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 05:53:08.436056 | orchestrator | Thursday 16 April 2026 05:52:59 +0000 (0:00:00.333) 0:00:17.965 ********
2026-04-16 05:53:08.436061 | orchestrator | ok: [testbed-node-3]
2026-04-16 05:53:08.436075 | orchestrator | ok: [testbed-node-4]
2026-04-16 05:53:08.436079 | orchestrator | ok: [testbed-node-5]
2026-04-16 05:53:08.436083 | orchestrator |
2026-04-16 05:53:08.436095 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 05:53:08.436099 | orchestrator | Thursday 16 April 2026 05:53:00 +0000 (0:00:00.639) 0:00:18.604 ********
2026-04-16 05:53:08.436103 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:53:08.436107 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:53:08.436110 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:53:08.436114 | orchestrator |
2026-04-16 05:53:08.436124 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 05:53:08.436128 | orchestrator | Thursday 16 April 2026 05:53:00 +0000 (0:00:00.278) 0:00:18.883 ********
2026-04-16 05:53:08.436132 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:53:08.436136 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:53:08.436139 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:53:08.436143 | orchestrator |
2026-04-16 05:53:08.436147 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 05:53:08.436151 | orchestrator | Thursday 16 April 2026 05:53:01 +0000 (0:00:00.692) 0:00:19.575 ********
2026-04-16 05:53:08.436154 | orchestrator | skipping: [testbed-node-3]
2026-04-16 05:53:08.436158 | orchestrator | skipping: [testbed-node-4]
2026-04-16 05:53:08.436162 | orchestrator | skipping: [testbed-node-5]
2026-04-16 05:53:08.436166 | orchestrator |
2026-04-16 05:53:08.436169 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 05:53:08.436173 | orchestrator | Thursday 16 April 2026 05:53:01 +0000 (0:00:00.322) 0:00:19.898 ********
2026-04-16 05:53:08.436177 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 05:53:08.436182 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 05:53:08.436185 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 05:53:08.436189 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 05:53:08.436193 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 05:53:08.436202 | orchestrator |
ok: [testbed-node-3] => (item=testbed-node-2) 2026-04-16 05:53:08.436206 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-04-16 05:53:08.436209 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-16 05:53:08.436213 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-04-16 05:53:08.436217 | orchestrator | 2026-04-16 05:53:08.436221 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 05:53:08.436225 | orchestrator | Thursday 16 April 2026 05:53:02 +0000 (0:00:01.030) 0:00:20.929 ******** 2026-04-16 05:53:08.436238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-16 05:53:08.436243 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-16 05:53:08.436247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-16 05:53:08.436250 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-16 05:53:08.436258 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-16 05:53:08.436262 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-16 05:53:08.436265 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:53:08.436269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-16 05:53:08.436273 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-16 05:53:08.436277 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-16 05:53:08.436280 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:53:08.436284 | orchestrator | 2026-04-16 05:53:08.436288 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 05:53:08.436292 | orchestrator | Thursday 16 April 2026 05:53:03 +0000 (0:00:00.374) 0:00:21.303 ******** 2026-04-16 
05:53:08.436296 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 05:53:08.436300 | orchestrator | 2026-04-16 05:53:08.436304 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 05:53:08.436310 | orchestrator | Thursday 16 April 2026 05:53:03 +0000 (0:00:00.699) 0:00:22.003 ******** 2026-04-16 05:53:08.436313 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436317 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:53:08.436321 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:53:08.436325 | orchestrator | 2026-04-16 05:53:08.436328 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 05:53:08.436333 | orchestrator | Thursday 16 April 2026 05:53:04 +0000 (0:00:00.301) 0:00:22.304 ******** 2026-04-16 05:53:08.436337 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436341 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:53:08.436346 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:53:08.436350 | orchestrator | 2026-04-16 05:53:08.436355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 05:53:08.436359 | orchestrator | Thursday 16 April 2026 05:53:04 +0000 (0:00:00.290) 0:00:22.595 ******** 2026-04-16 05:53:08.436364 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436368 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:53:08.436372 | orchestrator | skipping: [testbed-node-5] 2026-04-16 05:53:08.436377 | orchestrator | 2026-04-16 05:53:08.436381 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 05:53:08.436386 | orchestrator | Thursday 16 April 2026 05:53:04 +0000 (0:00:00.480) 0:00:23.075 ******** 2026-04-16 
05:53:08.436390 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:53:08.436395 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:53:08.436399 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:53:08.436404 | orchestrator | 2026-04-16 05:53:08.436408 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 05:53:08.436412 | orchestrator | Thursday 16 April 2026 05:53:05 +0000 (0:00:00.415) 0:00:23.491 ******** 2026-04-16 05:53:08.436421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:53:08.436428 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:53:08.436433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:53:08.436437 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436441 | orchestrator | 2026-04-16 05:53:08.436446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 05:53:08.436451 | orchestrator | Thursday 16 April 2026 05:53:05 +0000 (0:00:00.381) 0:00:23.872 ******** 2026-04-16 05:53:08.436455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:53:08.436460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:53:08.436463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:53:08.436467 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436471 | orchestrator | 2026-04-16 05:53:08.436475 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 05:53:08.436478 | orchestrator | Thursday 16 April 2026 05:53:06 +0000 (0:00:00.397) 0:00:24.270 ******** 2026-04-16 05:53:08.436482 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 05:53:08.436486 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 05:53:08.436490 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 05:53:08.436493 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:53:08.436497 | orchestrator | 2026-04-16 05:53:08.436501 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 05:53:08.436504 | orchestrator | Thursday 16 April 2026 05:53:06 +0000 (0:00:00.360) 0:00:24.631 ******** 2026-04-16 05:53:08.436508 | orchestrator | ok: [testbed-node-3] 2026-04-16 05:53:08.436512 | orchestrator | ok: [testbed-node-4] 2026-04-16 05:53:08.436516 | orchestrator | ok: [testbed-node-5] 2026-04-16 05:53:08.436519 | orchestrator | 2026-04-16 05:53:08.436523 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 05:53:08.436527 | orchestrator | Thursday 16 April 2026 05:53:06 +0000 (0:00:00.339) 0:00:24.970 ******** 2026-04-16 05:53:08.436531 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 05:53:08.436535 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 05:53:08.436538 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-16 05:53:08.436542 | orchestrator | 2026-04-16 05:53:08.436546 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 05:53:08.436550 | orchestrator | Thursday 16 April 2026 05:53:07 +0000 (0:00:00.709) 0:00:25.679 ******** 2026-04-16 05:53:08.436554 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 05:53:08.436561 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 05:54:49.840575 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 05:54:49.840693 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-16 05:54:49.840711 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-16 05:54:49.840724 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 05:54:49.840735 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 05:54:49.840747 | orchestrator | 2026-04-16 05:54:49.840758 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 05:54:49.840770 | orchestrator | Thursday 16 April 2026 05:53:08 +0000 (0:00:00.859) 0:00:26.539 ******** 2026-04-16 05:54:49.840781 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 05:54:49.840792 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 05:54:49.840803 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 05:54:49.840892 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-04-16 05:54:49.840906 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 05:54:49.840916 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 05:54:49.840927 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 05:54:49.840938 | orchestrator | 2026-04-16 05:54:49.840949 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-04-16 05:54:49.840959 | orchestrator | Thursday 16 April 2026 05:53:09 +0000 (0:00:01.564) 0:00:28.104 ******** 2026-04-16 05:54:49.840970 | orchestrator | skipping: [testbed-node-3] 2026-04-16 05:54:49.840982 | orchestrator | skipping: [testbed-node-4] 2026-04-16 05:54:49.840992 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-04-16 05:54:49.841003 | orchestrator | 2026-04-16 05:54:49.841017 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-04-16 05:54:49.841035 | orchestrator | Thursday 16 April 2026 05:53:10 +0000 (0:00:00.357) 0:00:28.461 ******** 2026-04-16 05:54:49.841057 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:54:49.841086 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:54:49.841125 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:54:49.841145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:54:49.841162 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-04-16 05:54:49.841179 | orchestrator | 2026-04-16 05:54:49.841198 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-04-16 05:54:49.841216 | orchestrator | Thursday 16 April 2026 05:53:56 +0000 (0:00:46.116) 0:01:14.578 ******** 2026-04-16 05:54:49.841233 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841268 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841284 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841321 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841338 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-04-16 05:54:49.841356 | orchestrator | 2026-04-16 05:54:49.841373 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-04-16 05:54:49.841389 | orchestrator | Thursday 16 April 2026 05:54:20 +0000 (0:00:24.417) 0:01:38.995 ******** 2026-04-16 05:54:49.841447 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841466 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841483 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841517 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841534 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841551 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 05:54:49.841568 | orchestrator | 2026-04-16 05:54:49.841586 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-04-16 05:54:49.841604 | orchestrator | Thursday 16 April 2026 05:54:33 +0000 (0:00:12.152) 0:01:51.148 ******** 2026-04-16 05:54:49.841622 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841640 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-16 05:54:49.841658 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-16 05:54:49.841676 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841695 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-16 05:54:49.841713 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-16 05:54:49.841730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841741 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-16 05:54:49.841752 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-16 05:54:49.841763 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841773 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-04-16 05:54:49.841784 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-16 05:54:49.841795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 05:54:49.841805 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-04-16 05:54:49.841816 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 05:54:49.841855 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 05:54:49.841866 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-16 05:54:49.841877 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 05:54:49.841888 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-04-16 05:54:49.841899 | orchestrator |
2026-04-16 05:54:49.841920 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:54:49.841932 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-04-16 05:54:49.841944 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-16 05:54:49.841956 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-16 05:54:49.841967 | orchestrator |
2026-04-16 05:54:49.841978 | orchestrator |
2026-04-16 05:54:49.841989 | orchestrator |
2026-04-16 05:54:49.842000 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:54:49.842088 | orchestrator | Thursday 16 April 2026 05:54:49 +0000 (0:00:16.767) 0:02:07.916 ********
2026-04-16 05:54:49.842101 | orchestrator | ===============================================================================
2026-04-16 05:54:49.842112 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.12s
2026-04-16 05:54:49.842122 | orchestrator | generate keys ---------------------------------------------------------- 24.42s
2026-04-16 05:54:49.842133 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.77s
2026-04-16 05:54:49.842144 | orchestrator | get keys from monitors ------------------------------------------------- 12.15s
2026-04-16 05:54:49.842155 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.12s
2026-04-16 05:54:49.842165 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.67s
2026-04-16 05:54:49.842181 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.56s
2026-04-16 05:54:49.842199 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.03s
2026-04-16 05:54:49.842216 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.94s
2026-04-16 05:54:49.842236 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.86s
2026-04-16 05:54:49.842256 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.84s
2026-04-16 05:54:49.842274 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.77s
2026-04-16 05:54:49.842291 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.75s
2026-04-16 05:54:49.842314 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.71s
2026-04-16 05:54:50.115544 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2026-04-16 05:54:50.115649 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.69s
2026-04-16 05:54:50.115664 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2026-04-16 05:54:50.115676 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s
2026-04-16 05:54:50.115687 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s
2026-04-16 05:54:50.115698 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.59s
2026-04-16 05:54:52.402526 | orchestrator | 2026-04-16 05:54:52 | INFO  | Task 6d61d542-c1cf-4017-8868-9bbb52b0609a (copy-ceph-keys) was prepared for execution.
2026-04-16 05:54:52.402627 | orchestrator | 2026-04-16 05:54:52 | INFO  | It takes a moment until task 6d61d542-c1cf-4017-8868-9bbb52b0609a (copy-ceph-keys) has been started and output is visible here.
2026-04-16 05:55:28.031373 | orchestrator |
2026-04-16 05:55:28.031492 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-04-16 05:55:28.031515 | orchestrator |
2026-04-16 05:55:28.031534 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-04-16 05:55:28.031553 | orchestrator | Thursday 16 April 2026 05:54:56 +0000 (0:00:00.119) 0:00:00.119 ********
2026-04-16 05:55:28.031571 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-16 05:55:28.031590 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.031606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.031623 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-16 05:55:28.031639 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.031656 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-16 05:55:28.031674 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-16 05:55:28.031723 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-16 05:55:28.031741 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-16 05:55:28.031758 | orchestrator |
2026-04-16 05:55:28.031777 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-04-16 05:55:28.031900 | orchestrator | Thursday 16 April 2026 05:55:00 +0000 (0:00:04.588) 0:00:04.708 ********
2026-04-16 05:55:28.031942 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-04-16 05:55:28.031965 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.031980 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.031993 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-04-16 05:55:28.032005 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032018 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-04-16 05:55:28.032030 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-04-16 05:55:28.032042 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-04-16 05:55:28.032054 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-04-16 05:55:28.032067 | orchestrator |
2026-04-16 05:55:28.032079 | orchestrator | TASK [Create share directory] **************************************************
2026-04-16 05:55:28.032093 | orchestrator | Thursday 16 April 2026 05:55:04 +0000 (0:00:04.221) 0:00:08.930 ********
2026-04-16 05:55:28.032106 | orchestrator | changed: [testbed-manager -> localhost]
2026-04-16 05:55:28.032119 | orchestrator |
2026-04-16 05:55:28.032131 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-04-16 05:55:28.032144 | orchestrator | Thursday 16 April 2026 05:55:05 +0000 (0:00:00.979) 0:00:09.910 ********
2026-04-16 05:55:28.032156 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-04-16 05:55:28.032171 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032183 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032197 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-04-16 05:55:28.032209 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032222 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-04-16 05:55:28.032234 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-04-16 05:55:28.032246 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-04-16 05:55:28.032259 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-04-16 05:55:28.032270 | orchestrator |
2026-04-16 05:55:28.032281 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-04-16 05:55:28.032292 | orchestrator | Thursday 16 April 2026 05:55:18 +0000 (0:00:12.658) 0:00:22.569 ********
2026-04-16 05:55:28.032303 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-04-16 05:55:28.032313 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-04-16 05:55:28.032324 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-16 05:55:28.032335 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-04-16 05:55:28.032378 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-16 05:55:28.032390 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-04-16 05:55:28.032401 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-04-16 05:55:28.032412 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-04-16 05:55:28.032422 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-04-16 05:55:28.032433 | orchestrator |
2026-04-16 05:55:28.032443 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-04-16 05:55:28.032454 | orchestrator | Thursday 16 April 2026 05:55:21 +0000 (0:00:02.872) 0:00:25.441 ********
2026-04-16 05:55:28.032465 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-04-16 05:55:28.032476 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032487 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032497 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-04-16 05:55:28.032508 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-04-16 05:55:28.032518 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-04-16 05:55:28.032529 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-04-16 05:55:28.032539 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-04-16 05:55:28.032549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-04-16 05:55:28.032561 | orchestrator |
2026-04-16 05:55:28.032577 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 05:55:28.032588 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 05:55:28.032600 | orchestrator |
2026-04-16 05:55:28.032611 | orchestrator |
2026-04-16 05:55:28.032621 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 05:55:28.032632 | orchestrator | Thursday 16 April 2026 05:55:27 +0000 (0:00:06.350) 0:00:31.791 ********
2026-04-16 05:55:28.032642 | orchestrator | ===============================================================================
2026-04-16 05:55:28.032653 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.66s
2026-04-16 05:55:28.032663 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.35s
2026-04-16 05:55:28.032674 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.59s
2026-04-16 05:55:28.032684 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.22s
2026-04-16 05:55:28.032695 | orchestrator | Check if target directories exist --------------------------------------- 2.87s
2026-04-16 05:55:28.032705 | orchestrator | Create share directory -------------------------------------------------- 0.98s
2026-04-16 05:55:40.377254 | orchestrator | 2026-04-16 05:55:40 | INFO  | Task 8baad881-4124-4327-94a9-9ee18aa69804 (cephclient) was prepared for execution.
2026-04-16 05:55:40.377429 | orchestrator | 2026-04-16 05:55:40 | INFO  | It takes a moment until task 8baad881-4124-4327-94a9-9ee18aa69804 (cephclient) has been started and output is visible here. 2026-04-16 05:56:38.034577 | orchestrator | 2026-04-16 05:56:38.034670 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-16 05:56:38.034679 | orchestrator | 2026-04-16 05:56:38.034686 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-16 05:56:38.034692 | orchestrator | Thursday 16 April 2026 05:55:44 +0000 (0:00:00.172) 0:00:00.173 ******** 2026-04-16 05:56:38.034698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-16 05:56:38.034723 | orchestrator | 2026-04-16 05:56:38.034729 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-16 05:56:38.034735 | orchestrator | Thursday 16 April 2026 05:55:44 +0000 (0:00:00.182) 0:00:00.355 ******** 2026-04-16 05:56:38.034741 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-16 05:56:38.034747 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-16 05:56:38.034753 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-16 05:56:38.034759 | orchestrator | 2026-04-16 05:56:38.034764 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-16 05:56:38.034770 | orchestrator | Thursday 16 April 2026 05:55:45 +0000 (0:00:01.034) 0:00:01.389 ******** 2026-04-16 05:56:38.034776 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-16 05:56:38.034818 | orchestrator | 2026-04-16 05:56:38.034824 | orchestrator | TASK [osism.services.cephclient : Copy keyring 
file] *************************** 2026-04-16 05:56:38.034829 | orchestrator | Thursday 16 April 2026 05:55:46 +0000 (0:00:01.182) 0:00:02.571 ******** 2026-04-16 05:56:38.034835 | orchestrator | changed: [testbed-manager] 2026-04-16 05:56:38.034841 | orchestrator | 2026-04-16 05:56:38.034846 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-16 05:56:38.034851 | orchestrator | Thursday 16 April 2026 05:55:47 +0000 (0:00:00.796) 0:00:03.368 ******** 2026-04-16 05:56:38.034857 | orchestrator | changed: [testbed-manager] 2026-04-16 05:56:38.034862 | orchestrator | 2026-04-16 05:56:38.034868 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-16 05:56:38.034873 | orchestrator | Thursday 16 April 2026 05:55:48 +0000 (0:00:00.775) 0:00:04.144 ******** 2026-04-16 05:56:38.034879 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-04-16 05:56:38.034884 | orchestrator | ok: [testbed-manager] 2026-04-16 05:56:38.034890 | orchestrator | 2026-04-16 05:56:38.034895 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-16 05:56:38.034901 | orchestrator | Thursday 16 April 2026 05:56:28 +0000 (0:00:40.631) 0:00:44.775 ******** 2026-04-16 05:56:38.034906 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-04-16 05:56:38.034912 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-04-16 05:56:38.034917 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-04-16 05:56:38.034923 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-04-16 05:56:38.034928 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-04-16 05:56:38.034934 | orchestrator | 2026-04-16 05:56:38.034940 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-16 05:56:38.034945 | 
orchestrator | Thursday 16 April 2026 05:56:32 +0000 (0:00:03.888) 0:00:48.664 ******** 2026-04-16 05:56:38.034951 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-16 05:56:38.034956 | orchestrator | 2026-04-16 05:56:38.034961 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-16 05:56:38.034967 | orchestrator | Thursday 16 April 2026 05:56:33 +0000 (0:00:00.458) 0:00:49.122 ******** 2026-04-16 05:56:38.034972 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:56:38.034978 | orchestrator | 2026-04-16 05:56:38.034983 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-04-16 05:56:38.034989 | orchestrator | Thursday 16 April 2026 05:56:33 +0000 (0:00:00.132) 0:00:49.254 ******** 2026-04-16 05:56:38.034994 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:56:38.034999 | orchestrator | 2026-04-16 05:56:38.035005 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-04-16 05:56:38.035021 | orchestrator | Thursday 16 April 2026 05:56:33 +0000 (0:00:00.490) 0:00:49.744 ******** 2026-04-16 05:56:38.035027 | orchestrator | changed: [testbed-manager] 2026-04-16 05:56:38.035039 | orchestrator | 2026-04-16 05:56:38.035045 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-04-16 05:56:38.035050 | orchestrator | Thursday 16 April 2026 05:56:35 +0000 (0:00:01.347) 0:00:51.092 ******** 2026-04-16 05:56:38.035056 | orchestrator | changed: [testbed-manager] 2026-04-16 05:56:38.035061 | orchestrator | 2026-04-16 05:56:38.035067 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-04-16 05:56:38.035072 | orchestrator | Thursday 16 April 2026 05:56:35 +0000 (0:00:00.636) 0:00:51.728 ******** 2026-04-16 05:56:38.035078 | orchestrator | changed: [testbed-manager] 2026-04-16 05:56:38.035083 | 
orchestrator | 2026-04-16 05:56:38.035088 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-04-16 05:56:38.035094 | orchestrator | Thursday 16 April 2026 05:56:36 +0000 (0:00:00.552) 0:00:52.281 ******** 2026-04-16 05:56:38.035099 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-16 05:56:38.035107 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-16 05:56:38.035115 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-16 05:56:38.035124 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-16 05:56:38.035133 | orchestrator | 2026-04-16 05:56:38.035142 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:56:38.035153 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 05:56:38.035162 | orchestrator | 2026-04-16 05:56:38.035171 | orchestrator | 2026-04-16 05:56:38.035195 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:56:38.035201 | orchestrator | Thursday 16 April 2026 05:56:37 +0000 (0:00:01.337) 0:00:53.618 ******** 2026-04-16 05:56:38.035206 | orchestrator | =============================================================================== 2026-04-16 05:56:38.035212 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.63s 2026-04-16 05:56:38.035217 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.89s 2026-04-16 05:56:38.035222 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.35s 2026-04-16 05:56:38.035228 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.34s 2026-04-16 05:56:38.035233 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.18s 2026-04-16 05:56:38.035239 | 
orchestrator | osism.services.cephclient : Create required directories ----------------- 1.03s 2026-04-16 05:56:38.035244 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.80s 2026-04-16 05:56:38.035249 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.78s 2026-04-16 05:56:38.035255 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.64s 2026-04-16 05:56:38.035260 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s 2026-04-16 05:56:38.035265 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.49s 2026-04-16 05:56:38.035271 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2026-04-16 05:56:38.035276 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.18s 2026-04-16 05:56:38.035281 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-04-16 05:56:40.231900 | orchestrator | 2026-04-16 05:56:40 | INFO  | Task c8754a09-32fa-4082-afaa-efdbadf194ea (ceph-bootstrap-dashboard) was prepared for execution. 2026-04-16 05:56:40.232001 | orchestrator | 2026-04-16 05:56:40 | INFO  | It takes a moment until task c8754a09-32fa-4082-afaa-efdbadf194ea (ceph-bootstrap-dashboard) has been started and output is visible here. 
2026-04-16 05:57:59.984138 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-16 05:57:59.984236 | orchestrator | 2.16.14 2026-04-16 05:57:59.984252 | orchestrator | 2026-04-16 05:57:59.984265 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2026-04-16 05:57:59.984297 | orchestrator | 2026-04-16 05:57:59.984308 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-04-16 05:57:59.984319 | orchestrator | Thursday 16 April 2026 05:56:44 +0000 (0:00:00.254) 0:00:00.254 ******** 2026-04-16 05:57:59.984330 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984341 | orchestrator | 2026-04-16 05:57:59.984352 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-04-16 05:57:59.984363 | orchestrator | Thursday 16 April 2026 05:56:46 +0000 (0:00:01.687) 0:00:01.942 ******** 2026-04-16 05:57:59.984374 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984385 | orchestrator | 2026-04-16 05:57:59.984395 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-04-16 05:57:59.984406 | orchestrator | Thursday 16 April 2026 05:56:47 +0000 (0:00:01.028) 0:00:02.971 ******** 2026-04-16 05:57:59.984417 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984427 | orchestrator | 2026-04-16 05:57:59.984438 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-04-16 05:57:59.984449 | orchestrator | Thursday 16 April 2026 05:56:48 +0000 (0:00:01.026) 0:00:03.997 ******** 2026-04-16 05:57:59.984459 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984470 | orchestrator | 2026-04-16 05:57:59.984481 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-04-16 05:57:59.984492 | orchestrator | Thursday 16 April 
2026 05:56:49 +0000 (0:00:01.101) 0:00:05.098 ******** 2026-04-16 05:57:59.984502 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984513 | orchestrator | 2026-04-16 05:57:59.984535 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-04-16 05:57:59.984547 | orchestrator | Thursday 16 April 2026 05:56:50 +0000 (0:00:00.963) 0:00:06.062 ******** 2026-04-16 05:57:59.984557 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984569 | orchestrator | 2026-04-16 05:57:59.984580 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-04-16 05:57:59.984590 | orchestrator | Thursday 16 April 2026 05:56:51 +0000 (0:00:01.008) 0:00:07.070 ******** 2026-04-16 05:57:59.984601 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984612 | orchestrator | 2026-04-16 05:57:59.984623 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-04-16 05:57:59.984633 | orchestrator | Thursday 16 April 2026 05:56:52 +0000 (0:00:01.100) 0:00:08.170 ******** 2026-04-16 05:57:59.984644 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984655 | orchestrator | 2026-04-16 05:57:59.984665 | orchestrator | TASK [Create admin user] ******************************************************* 2026-04-16 05:57:59.984676 | orchestrator | Thursday 16 April 2026 05:56:53 +0000 (0:00:01.076) 0:00:09.247 ******** 2026-04-16 05:57:59.984687 | orchestrator | changed: [testbed-manager] 2026-04-16 05:57:59.984699 | orchestrator | 2026-04-16 05:57:59.984712 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-04-16 05:57:59.984724 | orchestrator | Thursday 16 April 2026 05:57:35 +0000 (0:00:41.826) 0:00:51.074 ******** 2026-04-16 05:57:59.984736 | orchestrator | skipping: [testbed-manager] 2026-04-16 05:57:59.984748 | orchestrator | 2026-04-16 05:57:59.984789 | orchestrator | 
PLAY [Restart ceph manager services] ******************************************* 2026-04-16 05:57:59.984802 | orchestrator | 2026-04-16 05:57:59.984814 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-16 05:57:59.984827 | orchestrator | Thursday 16 April 2026 05:57:35 +0000 (0:00:00.171) 0:00:51.245 ******** 2026-04-16 05:57:59.984839 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:57:59.984852 | orchestrator | 2026-04-16 05:57:59.984864 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-16 05:57:59.984876 | orchestrator | 2026-04-16 05:57:59.984888 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-16 05:57:59.984902 | orchestrator | Thursday 16 April 2026 05:57:47 +0000 (0:00:11.863) 0:01:03.109 ******** 2026-04-16 05:57:59.984922 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:57:59.984935 | orchestrator | 2026-04-16 05:57:59.984947 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-04-16 05:57:59.984960 | orchestrator | 2026-04-16 05:57:59.984973 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-04-16 05:57:59.984986 | orchestrator | Thursday 16 April 2026 05:57:48 +0000 (0:00:01.149) 0:01:04.258 ******** 2026-04-16 05:57:59.984998 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:57:59.985010 | orchestrator | 2026-04-16 05:57:59.985022 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 05:57:59.985036 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 05:57:59.985050 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:57:59.985063 | orchestrator | testbed-node-1 : ok=1  changed=1  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:57:59.985074 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 05:57:59.985085 | orchestrator | 2026-04-16 05:57:59.985096 | orchestrator | 2026-04-16 05:57:59.985106 | orchestrator | 2026-04-16 05:57:59.985117 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 05:57:59.985128 | orchestrator | Thursday 16 April 2026 05:57:59 +0000 (0:00:11.236) 0:01:15.495 ******** 2026-04-16 05:57:59.985138 | orchestrator | =============================================================================== 2026-04-16 05:57:59.985149 | orchestrator | Create admin user ------------------------------------------------------ 41.83s 2026-04-16 05:57:59.985176 | orchestrator | Restart ceph manager service ------------------------------------------- 24.25s 2026-04-16 05:57:59.985187 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.69s 2026-04-16 05:57:59.985198 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.10s 2026-04-16 05:57:59.985209 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.10s 2026-04-16 05:57:59.985220 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.08s 2026-04-16 05:57:59.985230 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s 2026-04-16 05:57:59.985241 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.03s 2026-04-16 05:57:59.985251 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s 2026-04-16 05:57:59.985262 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.96s 2026-04-16 05:57:59.985272 | orchestrator | Remove temporary file for 
ceph_dashboard_password ----------------------- 0.17s 2026-04-16 05:58:00.277459 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh 2026-04-16 05:58:02.291506 | orchestrator | 2026-04-16 05:58:02 | INFO  | Task 16a3d854-634f-4e29-9ffb-696861807560 (keystone) was prepared for execution. 2026-04-16 05:58:02.291592 | orchestrator | 2026-04-16 05:58:02 | INFO  | It takes a moment until task 16a3d854-634f-4e29-9ffb-696861807560 (keystone) has been started and output is visible here. 2026-04-16 05:58:09.140471 | orchestrator | 2026-04-16 05:58:09.140550 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 05:58:09.140562 | orchestrator | 2026-04-16 05:58:09.140583 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 05:58:09.140592 | orchestrator | Thursday 16 April 2026 05:58:06 +0000 (0:00:00.247) 0:00:00.247 ******** 2026-04-16 05:58:09.140601 | orchestrator | ok: [testbed-node-0] 2026-04-16 05:58:09.140610 | orchestrator | ok: [testbed-node-1] 2026-04-16 05:58:09.140619 | orchestrator | ok: [testbed-node-2] 2026-04-16 05:58:09.140628 | orchestrator | 2026-04-16 05:58:09.140636 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 05:58:09.140661 | orchestrator | Thursday 16 April 2026 05:58:06 +0000 (0:00:00.299) 0:00:00.546 ******** 2026-04-16 05:58:09.140671 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-04-16 05:58:09.140680 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-04-16 05:58:09.140688 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-04-16 05:58:09.140697 | orchestrator | 2026-04-16 05:58:09.140705 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-04-16 05:58:09.140714 | orchestrator | 2026-04-16 05:58:09.140723 | orchestrator | TASK 
[keystone : include_tasks] ************************************************ 2026-04-16 05:58:09.140732 | orchestrator | Thursday 16 April 2026 05:58:07 +0000 (0:00:00.379) 0:00:00.926 ******** 2026-04-16 05:58:09.140741 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:58:09.140786 | orchestrator | 2026-04-16 05:58:09.140796 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-04-16 05:58:09.140805 | orchestrator | Thursday 16 April 2026 05:58:07 +0000 (0:00:00.535) 0:00:01.461 ******** 2026-04-16 05:58:09.140818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:09.140832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:09.140862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:09.140880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:09.140941 | orchestrator | 2026-04-16 05:58:09.140950 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 
2026-04-16 05:58:09.140964 | orchestrator | Thursday 16 April 2026 05:58:09 +0000 (0:00:01.587) 0:00:03.049 ******** 2026-04-16 05:58:14.746681 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:14.746828 | orchestrator | 2026-04-16 05:58:14.746875 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-04-16 05:58:14.746890 | orchestrator | Thursday 16 April 2026 05:58:09 +0000 (0:00:00.314) 0:00:03.363 ******** 2026-04-16 05:58:14.746901 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:14.746912 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:14.746923 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:14.746934 | orchestrator | 2026-04-16 05:58:14.746945 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-04-16 05:58:14.746956 | orchestrator | Thursday 16 April 2026 05:58:09 +0000 (0:00:00.304) 0:00:03.668 ******** 2026-04-16 05:58:14.746967 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 05:58:14.746978 | orchestrator | 2026-04-16 05:58:14.746989 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-16 05:58:14.747001 | orchestrator | Thursday 16 April 2026 05:58:10 +0000 (0:00:00.798) 0:00:04.467 ******** 2026-04-16 05:58:14.747012 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 05:58:14.747023 | orchestrator | 2026-04-16 05:58:14.747034 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-04-16 05:58:14.747045 | orchestrator | Thursday 16 April 2026 05:58:11 +0000 (0:00:00.558) 0:00:05.025 ******** 2026-04-16 05:58:14.747061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:14.747078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:14.747091 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:14.747150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:14.747231 | orchestrator | 2026-04-16 05:58:14.747244 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-04-16 05:58:14.747256 | orchestrator | Thursday 16 April 2026 05:58:14 +0000 (0:00:03.088) 0:00:08.114 ******** 2026-04-16 05:58:14.747278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:15.563887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:15.564071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:15.564100 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:15.564117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:15.564149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:15.564166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:15.564178 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:15.564210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:15.564223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-04-16 05:58:15.564234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:15.564254 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:15.564265 | orchestrator | 2026-04-16 05:58:15.564277 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-04-16 05:58:15.564289 | orchestrator | Thursday 16 April 2026 05:58:14 +0000 (0:00:00.549) 0:00:08.663 ******** 2026-04-16 05:58:15.564301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:15.564317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:15.564337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:18.714453 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:18.714545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:18.714566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:18.714604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:18.714617 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:18.714642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:18.714656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:18.714685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:18.714698 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:18.714709 | orchestrator | 2026-04-16 05:58:18.714721 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-04-16 05:58:18.714733 | orchestrator | Thursday 16 April 2026 05:58:15 +0000 (0:00:00.810) 0:00:09.474 ******** 2026-04-16 05:58:18.714745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:18.714810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:18.714829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:18.714851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:23.243734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 05:58:23.243889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-04-16 05:58:23.243907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:23.243918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:23.243942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 
05:58:23.243955 | orchestrator | 2026-04-16 05:58:23.243968 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-16 05:58:23.243979 | orchestrator | Thursday 16 April 2026 05:58:18 +0000 (0:00:03.153) 0:00:12.628 ******** 2026-04-16 05:58:23.244010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:23.244033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-04-16 05:58:23.244046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:23.244058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:23.244102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:23.244124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:26.635400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:26.635513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:26.635530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 05:58:26.635542 | orchestrator | 2026-04-16 05:58:26.635555 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-16 05:58:26.635568 | orchestrator | Thursday 16 April 2026 05:58:23 +0000 (0:00:04.529) 0:00:17.157 ******** 2026-04-16 05:58:26.635580 | orchestrator | changed: [testbed-node-1] 2026-04-16 05:58:26.635591 | orchestrator | changed: [testbed-node-0] 2026-04-16 05:58:26.635602 | orchestrator | changed: [testbed-node-2] 2026-04-16 05:58:26.635613 | orchestrator | 
2026-04-16 05:58:26.635624 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-04-16 05:58:26.635635 | orchestrator | Thursday 16 April 2026 05:58:24 +0000 (0:00:01.414) 0:00:18.572 ******** 2026-04-16 05:58:26.635646 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:26.635656 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:26.635667 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:26.635678 | orchestrator | 2026-04-16 05:58:26.635689 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-16 05:58:26.635700 | orchestrator | Thursday 16 April 2026 05:58:25 +0000 (0:00:00.686) 0:00:19.259 ******** 2026-04-16 05:58:26.635710 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:26.635738 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:26.635800 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:26.635812 | orchestrator | 2026-04-16 05:58:26.635823 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-16 05:58:26.635834 | orchestrator | Thursday 16 April 2026 05:58:25 +0000 (0:00:00.482) 0:00:19.742 ******** 2026-04-16 05:58:26.635845 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:26.635855 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:26.635866 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:26.635877 | orchestrator | 2026-04-16 05:58:26.635888 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-16 05:58:26.635899 | orchestrator | Thursday 16 April 2026 05:58:26 +0000 (0:00:00.284) 0:00:20.026 ******** 2026-04-16 05:58:26.635960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:26.635976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:26.635988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:26.635999 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:26.636012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:26.636030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:26.636053 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:26.636065 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:26.636084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-04-16 05:58:44.795393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 05:58:44.795476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 05:58:44.795483 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:44.795489 | orchestrator | 2026-04-16 05:58:44.795494 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-16 05:58:44.795499 | orchestrator | Thursday 16 April 2026 05:58:26 +0000 (0:00:00.523) 0:00:20.550 ******** 2026-04-16 05:58:44.795503 | orchestrator | skipping: [testbed-node-0] 2026-04-16 05:58:44.795507 | orchestrator | skipping: [testbed-node-1] 2026-04-16 05:58:44.795511 | orchestrator | skipping: [testbed-node-2] 2026-04-16 05:58:44.795514 | orchestrator | 2026-04-16 05:58:44.795518 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-16 05:58:44.795522 | orchestrator | Thursday 16 April 2026 05:58:26 +0000 (0:00:00.296) 0:00:20.846 ******** 2026-04-16 05:58:44.795526 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-16 05:58:44.795544 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-16 05:58:44.795558 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-04-16 05:58:44.795562 | orchestrator |
2026-04-16 05:58:44.795566 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-04-16 05:58:44.795570 | orchestrator | Thursday 16 April 2026 05:58:28 +0000 (0:00:01.729) 0:00:22.576 ********
2026-04-16 05:58:44.795573 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 05:58:44.795577 | orchestrator |
2026-04-16 05:58:44.795581 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-04-16 05:58:44.795585 | orchestrator | Thursday 16 April 2026 05:58:29 +0000 (0:00:00.862) 0:00:23.438 ********
2026-04-16 05:58:44.795589 | orchestrator | skipping: [testbed-node-0]
2026-04-16 05:58:44.795592 | orchestrator | skipping: [testbed-node-1]
2026-04-16 05:58:44.795596 | orchestrator | skipping: [testbed-node-2]
2026-04-16 05:58:44.795600 | orchestrator |
2026-04-16 05:58:44.795604 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-04-16 05:58:44.795607 | orchestrator | Thursday 16 April 2026 05:58:30 +0000 (0:00:00.532) 0:00:23.970 ********
2026-04-16 05:58:44.795611 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 05:58:44.795615 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 05:58:44.795619 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 05:58:44.795622 | orchestrator |
2026-04-16 05:58:44.795626 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-04-16 05:58:44.795632 | orchestrator | Thursday 16 April 2026 05:58:31 +0000 (0:00:01.033) 0:00:25.003 ********
2026-04-16 05:58:44.795635 | orchestrator | ok: [testbed-node-0]
2026-04-16 05:58:44.795664 | orchestrator | ok: [testbed-node-1]
2026-04-16 05:58:44.795669 | orchestrator | ok: [testbed-node-2]
2026-04-16 05:58:44.795673 | orchestrator |
2026-04-16 05:58:44.795678 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-04-16 05:58:44.795682 | orchestrator | Thursday 16 April 2026 05:58:31 +0000 (0:00:00.462) 0:00:25.466 ********
2026-04-16 05:58:44.795686 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-16 05:58:44.795691 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-16 05:58:44.795695 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-04-16 05:58:44.795699 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-16 05:58:44.795703 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-16 05:58:44.795707 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-04-16 05:58:44.795711 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-16 05:58:44.795716 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-16 05:58:44.795731 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-04-16 05:58:44.795736 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-16 05:58:44.795740 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-16 05:58:44.795765 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-04-16 05:58:44.795771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-16 05:58:44.795775 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-16 05:58:44.795779 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-04-16 05:58:44.795787 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-16 05:58:44.795791 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-16 05:58:44.795795 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-04-16 05:58:44.795799 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-16 05:58:44.795803 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-16 05:58:44.795808 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-04-16 05:58:44.795814 | orchestrator |
2026-04-16 05:58:44.795820 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-04-16 05:58:44.795826 | orchestrator | Thursday 16 April 2026 05:58:39 +0000 (0:00:08.441) 0:00:33.907 ********
2026-04-16 05:58:44.795832 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-16 05:58:44.795837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-16 05:58:44.795843 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-04-16 05:58:44.795849
| orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 05:58:44.795854 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 05:58:44.795860 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 05:58:44.795867 | orchestrator | 2026-04-16 05:58:44.795877 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-04-16 05:58:44.795884 | orchestrator | Thursday 16 April 2026 05:58:42 +0000 (0:00:02.505) 0:00:36.412 ******** 2026-04-16 05:58:44.795893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 05:58:44.795908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 06:00:15.091352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-04-16 06:00:15.091469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 06:00:15.091610 | orchestrator | 2026-04-16 06:00:15.091622 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
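The container definitions dumped in the loop items above all share the same kolla-style healthcheck mapping: `interval`, `retries`, `start_period`, and `timeout` as strings of seconds, plus a `test` list in `CMD-SHELL` form. As a sketch only — the helper below is hypothetical; kolla-ansible applies these mappings through its own container modules — here is how such a mapping lines up with the standard `docker run` health flags:

```python
# Sketch: translate a kolla-style healthcheck mapping (as dumped in the
# loop items above) into the equivalent `docker run` options. Helper
# name is hypothetical, not part of kolla-ansible.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Map interval/retries/start_period/timeout (seconds, as strings)
    and the CMD-SHELL test list onto docker CLI --health-* options."""
    kind, cmd = hc["test"][0], " ".join(hc["test"][1:])
    assert kind == "CMD-SHELL"  # the only form that appears in this log
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The keystone_fernet healthcheck as logged above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "/usr/bin/fernet-healthcheck.sh"],
      "timeout": "30"}
print(healthcheck_to_docker_flags(hc)[0])  # → --health-cmd=/usr/bin/fernet-healthcheck.sh
```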
2026-04-16 06:00:15.091635 | orchestrator | Thursday 16 April 2026 05:58:44 +0000 (0:00:02.294) 0:00:38.707 ********
2026-04-16 06:00:15.091646 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:00:15.091658 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:00:15.091669 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:00:15.091680 | orchestrator |
2026-04-16 06:00:15.091691 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-04-16 06:00:15.091702 | orchestrator | Thursday 16 April 2026 05:58:45 +0000 (0:00:00.445) 0:00:39.153 ********
2026-04-16 06:00:15.091713 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.091778 | orchestrator |
2026-04-16 06:00:15.091791 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-04-16 06:00:15.091802 | orchestrator | Thursday 16 April 2026 05:58:47 +0000 (0:00:02.284) 0:00:41.437 ********
2026-04-16 06:00:15.091813 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.091824 | orchestrator |
2026-04-16 06:00:15.091835 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-04-16 06:00:15.091845 | orchestrator | Thursday 16 April 2026 05:58:49 +0000 (0:00:02.150) 0:00:43.588 ********
2026-04-16 06:00:15.091856 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:00:15.091868 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:00:15.091880 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:00:15.091893 | orchestrator |
2026-04-16 06:00:15.091907 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-04-16 06:00:15.091920 | orchestrator | Thursday 16 April 2026 05:58:50 +0000 (0:00:00.844) 0:00:44.432 ********
2026-04-16 06:00:15.091934 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:00:15.091946 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:00:15.091959 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:00:15.091973 | orchestrator |
2026-04-16 06:00:15.091993 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-04-16 06:00:15.092007 | orchestrator | Thursday 16 April 2026 05:58:50 +0000 (0:00:00.294) 0:00:44.727 ********
2026-04-16 06:00:15.092021 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:00:15.092034 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:00:15.092047 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:00:15.092061 | orchestrator |
2026-04-16 06:00:15.092074 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-04-16 06:00:15.092087 | orchestrator | Thursday 16 April 2026 05:58:51 +0000 (0:00:00.498) 0:00:45.226 ********
2026-04-16 06:00:15.092100 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.092113 | orchestrator |
2026-04-16 06:00:15.092126 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-04-16 06:00:15.092140 | orchestrator | Thursday 16 April 2026 05:59:06 +0000 (0:00:14.740) 0:00:59.966 ********
2026-04-16 06:00:15.092152 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.092165 | orchestrator |
2026-04-16 06:00:15.092179 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 06:00:15.092202 | orchestrator | Thursday 16 April 2026 05:59:16 +0000 (0:00:10.903) 0:01:10.869 ********
2026-04-16 06:00:15.092215 | orchestrator |
2026-04-16 06:00:15.092228 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 06:00:15.092239 | orchestrator | Thursday 16 April 2026 05:59:17 +0000 (0:00:00.062) 0:01:10.932 ********
2026-04-16 06:00:15.092250 | orchestrator |
2026-04-16 06:00:15.092261 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 06:00:15.092272 | orchestrator | Thursday 16 April 2026 05:59:17 +0000 (0:00:00.065) 0:01:10.998 ********
2026-04-16 06:00:15.092282 | orchestrator |
2026-04-16 06:00:15.092293 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-16 06:00:15.092304 | orchestrator | Thursday 16 April 2026 05:59:17 +0000 (0:00:00.068) 0:01:11.067 ********
2026-04-16 06:00:15.092315 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.092325 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:00:15.092336 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:00:15.092347 | orchestrator |
2026-04-16 06:00:15.092357 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-16 06:00:15.092368 | orchestrator | Thursday 16 April 2026 06:00:03 +0000 (0:00:45.985) 0:01:57.052 ********
2026-04-16 06:00:15.092379 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.092389 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:00:15.092400 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:00:15.092410 | orchestrator |
2026-04-16 06:00:15.092421 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-16 06:00:15.092432 | orchestrator | Thursday 16 April 2026 06:00:07 +0000 (0:00:04.869) 0:02:01.922 ********
2026-04-16 06:00:15.092443 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:00:15.092454 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:00:15.092464 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:00:15.092475 | orchestrator |
2026-04-16 06:00:15.092501 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-16 06:00:15.092512 | orchestrator | Thursday 16 April 2026 06:00:14 +0000 (0:00:06.565) 0:02:08.487 ********
2026-04-16 06:00:15.092532 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:01:05.812326 | orchestrator |
2026-04-16 06:01:05.812448 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-04-16 06:01:05.812474 | orchestrator | Thursday 16 April 2026 06:00:15 +0000 (0:00:00.519) 0:02:09.007 ********
2026-04-16 06:01:05.812496 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:01:05.812515 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:01:05.812535 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:01:05.812547 | orchestrator |
2026-04-16 06:01:05.812559 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-04-16 06:01:05.812570 | orchestrator | Thursday 16 April 2026 06:00:16 +0000 (0:00:01.146) 0:02:10.153 ********
2026-04-16 06:01:05.812582 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:01:05.812593 | orchestrator |
2026-04-16 06:01:05.812604 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-04-16 06:01:05.812616 | orchestrator | Thursday 16 April 2026 06:00:18 +0000 (0:00:01.834) 0:02:11.988 ********
2026-04-16 06:01:05.812627 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-04-16 06:01:05.812638 | orchestrator |
2026-04-16 06:01:05.812649 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-04-16 06:01:05.812660 | orchestrator | Thursday 16 April 2026 06:00:30 +0000 (0:00:12.143) 0:02:24.131 ********
2026-04-16 06:01:05.812671 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-04-16 06:01:05.812682 | orchestrator |
2026-04-16 06:01:05.812693 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-04-16 06:01:05.812704 | orchestrator | Thursday 16 April 2026 06:00:54 +0000 (0:00:24.275) 0:02:48.407 ********
2026-04-16 06:01:05.812814 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-04-16 06:01:05.812829 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-04-16 06:01:05.812843 | orchestrator |
2026-04-16 06:01:05.812856 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-04-16 06:01:05.812869 | orchestrator | Thursday 16 April 2026 06:01:00 +0000 (0:00:06.375) 0:02:54.783 ********
2026-04-16 06:01:05.812882 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:05.812895 | orchestrator |
2026-04-16 06:01:05.812909 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-04-16 06:01:05.812922 | orchestrator | Thursday 16 April 2026 06:01:00 +0000 (0:00:00.114) 0:02:54.898 ********
2026-04-16 06:01:05.812935 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:05.812951 | orchestrator |
2026-04-16 06:01:05.812972 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-04-16 06:01:05.813010 | orchestrator | Thursday 16 April 2026 06:01:01 +0000 (0:00:00.107) 0:02:55.005 ********
2026-04-16 06:01:05.813032 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:05.813052 | orchestrator |
2026-04-16 06:01:05.813069 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-04-16 06:01:05.813083 | orchestrator | Thursday 16 April 2026 06:01:01 +0000 (0:00:00.115) 0:02:55.121 ********
2026-04-16 06:01:05.813096 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:05.813110 | orchestrator |
2026-04-16 06:01:05.813139 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-04-16 06:01:05.813163 | orchestrator | Thursday 16 April 2026 06:01:01 +0000 (0:00:00.456) 0:02:55.577 ********
2026-04-16 06:01:05.813176 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:01:05.813189 | orchestrator |
2026-04-16 06:01:05.813203 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-16 06:01:05.813217 | orchestrator | Thursday 16 April 2026 06:01:05 +0000 (0:00:03.418) 0:02:58.995 ********
2026-04-16 06:01:05.813229 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:05.813240 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:01:05.813251 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:01:05.813262 | orchestrator |
2026-04-16 06:01:05.813273 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:01:05.813285 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-16 06:01:05.813298 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-16 06:01:05.813309 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-16 06:01:05.813320 | orchestrator |
2026-04-16 06:01:05.813331 | orchestrator |
2026-04-16 06:01:05.813342 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:01:05.813353 | orchestrator | Thursday 16 April 2026 06:01:05 +0000 (0:00:00.417) 0:02:59.413 ********
2026-04-16 06:01:05.813364 | orchestrator | ===============================================================================
2026-04-16 06:01:05.813374 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 45.99s
2026-04-16 06:01:05.813385 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.28s
2026-04-16 06:01:05.813396 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.74s
2026-04-16 06:01:05.813407 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.14s
2026-04-16 06:01:05.813417 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.90s
2026-04-16 06:01:05.813428 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.44s
2026-04-16 06:01:05.813439 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.57s
2026-04-16 06:01:05.813466 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.38s
2026-04-16 06:01:05.813487 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.87s
2026-04-16 06:01:05.813532 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.53s
2026-04-16 06:01:05.813555 | orchestrator | keystone : Creating default user role ----------------------------------- 3.42s
2026-04-16 06:01:05.813574 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s
2026-04-16 06:01:05.813594 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.09s
2026-04-16 06:01:05.813606 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.51s
2026-04-16 06:01:05.813617 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.29s
2026-04-16 06:01:05.813628 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s
2026-04-16 06:01:05.813638 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.15s
2026-04-16 06:01:05.813649 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.83s
2026-04-16 06:01:05.813660 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.73s
2026-04-16 06:01:05.813670 | orchestrator | keystone : Ensuring config directories exist ----------------------------
1.59s 2026-04-16 06:01:07.966130 | orchestrator | 2026-04-16 06:01:07 | INFO  | Task d2583c0f-ee93-4c2f-9d93-f04a292df439 (placement) was prepared for execution. 2026-04-16 06:01:07.966233 | orchestrator | 2026-04-16 06:01:07 | INFO  | It takes a moment until task d2583c0f-ee93-4c2f-9d93-f04a292df439 (placement) has been started and output is visible here. 2026-04-16 06:01:43.095421 | orchestrator | 2026-04-16 06:01:43.095504 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:01:43.095510 | orchestrator | 2026-04-16 06:01:43.095515 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:01:43.095520 | orchestrator | Thursday 16 April 2026 06:01:11 +0000 (0:00:00.185) 0:00:00.185 ******** 2026-04-16 06:01:43.095524 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:01:43.095530 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:01:43.095534 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:01:43.095538 | orchestrator | 2026-04-16 06:01:43.095542 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:01:43.095546 | orchestrator | Thursday 16 April 2026 06:01:11 +0000 (0:00:00.269) 0:00:00.454 ******** 2026-04-16 06:01:43.095550 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-04-16 06:01:43.095555 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-04-16 06:01:43.095569 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-04-16 06:01:43.095573 | orchestrator | 2026-04-16 06:01:43.095577 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-04-16 06:01:43.095580 | orchestrator | 2026-04-16 06:01:43.095584 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-16 06:01:43.095588 | orchestrator | Thursday 16 April 2026 06:01:12 
+0000 (0:00:00.358) 0:00:00.812 ******** 2026-04-16 06:01:43.095592 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:01:43.095597 | orchestrator | 2026-04-16 06:01:43.095601 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-04-16 06:01:43.095604 | orchestrator | Thursday 16 April 2026 06:01:12 +0000 (0:00:00.494) 0:00:01.307 ******** 2026-04-16 06:01:43.095608 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-04-16 06:01:43.095612 | orchestrator | 2026-04-16 06:01:43.095616 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-04-16 06:01:43.095620 | orchestrator | Thursday 16 April 2026 06:01:16 +0000 (0:00:04.073) 0:00:05.381 ******** 2026-04-16 06:01:43.095623 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-04-16 06:01:43.095640 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-04-16 06:01:43.095644 | orchestrator | 2026-04-16 06:01:43.095648 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-04-16 06:01:43.095651 | orchestrator | Thursday 16 April 2026 06:01:23 +0000 (0:00:06.951) 0:00:12.333 ******** 2026-04-16 06:01:43.095655 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-04-16 06:01:43.095659 | orchestrator | 2026-04-16 06:01:43.095663 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-04-16 06:01:43.095667 | orchestrator | Thursday 16 April 2026 06:01:27 +0000 (0:00:03.932) 0:00:16.265 ******** 2026-04-16 06:01:43.095670 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:01:43.095674 | orchestrator | changed: [testbed-node-0] => (item=placement -> 
service) 2026-04-16 06:01:43.095678 | orchestrator | 2026-04-16 06:01:43.095682 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-04-16 06:01:43.095685 | orchestrator | Thursday 16 April 2026 06:01:31 +0000 (0:00:04.052) 0:00:20.318 ******** 2026-04-16 06:01:43.095689 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-16 06:01:43.095693 | orchestrator | 2026-04-16 06:01:43.095697 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-04-16 06:01:43.095735 | orchestrator | Thursday 16 April 2026 06:01:34 +0000 (0:00:03.255) 0:00:23.573 ******** 2026-04-16 06:01:43.095740 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-16 06:01:43.095744 | orchestrator | 2026-04-16 06:01:43.095748 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-16 06:01:43.095751 | orchestrator | Thursday 16 April 2026 06:01:39 +0000 (0:00:04.144) 0:00:27.718 ******** 2026-04-16 06:01:43.095755 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:01:43.095759 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:01:43.095762 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:01:43.095766 | orchestrator | 2026-04-16 06:01:43.095770 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-16 06:01:43.095774 | orchestrator | Thursday 16 April 2026 06:01:39 +0000 (0:00:00.274) 0:00:27.993 ******** 2026-04-16 06:01:43.095780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:43.095802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:43.095811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:43.095815 | orchestrator | 2026-04-16 06:01:43.095819 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-16 06:01:43.095823 | orchestrator | Thursday 16 April 2026 06:01:40 +0000 (0:00:00.983) 0:00:28.976 ******** 2026-04-16 06:01:43.095827 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:01:43.095831 | orchestrator | 2026-04-16 06:01:43.095835 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-16 06:01:43.095838 | orchestrator | Thursday 16 April 2026 06:01:40 +0000 (0:00:00.295) 0:00:29.272 ******** 2026-04-16 06:01:43.095842 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:01:43.095846 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:01:43.095850 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:01:43.095853 | orchestrator | 2026-04-16 06:01:43.095857 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-16 06:01:43.095861 | orchestrator | Thursday 16 April 2026 06:01:40 +0000 (0:00:00.293) 0:00:29.566 ******** 2026-04-16 06:01:43.095865 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:01:43.095869 | orchestrator | 2026-04-16 06:01:43.095873 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-16 06:01:43.095876 | orchestrator | Thursday 16 April 2026 06:01:41 +0000 (0:00:00.505) 0:00:30.071 ******** 2026-04-16 
06:01:43.095880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:43.095889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:45.872119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:45.872219 | orchestrator | 2026-04-16 06:01:45.872235 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-16 06:01:45.872253 | orchestrator | Thursday 16 April 2026 06:01:43 +0000 (0:00:01.605) 0:00:31.677 ******** 2026-04-16 06:01:45.872272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872291 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:01:45.872310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872325 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:01:45.872335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872367 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:01:45.872378 | orchestrator | 2026-04-16 06:01:45.872388 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-16 06:01:45.872415 | orchestrator | Thursday 16 April 2026 06:01:43 +0000 (0:00:00.549) 0:00:32.227 ******** 2026-04-16 06:01:45.872433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872444 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:01:45.872454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872464 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:01:45.872474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-04-16 06:01:45.872483 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:01:45.872493 | orchestrator | 2026-04-16 06:01:45.872502 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-16 06:01:45.872512 | orchestrator | Thursday 16 April 2026 06:01:44 +0000 (0:00:00.691) 0:00:32.918 ******** 2026-04-16 06:01:45.872522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 
'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:45.872552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:52.384612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:52.384806 | orchestrator | 2026-04-16 06:01:52.384828 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-16 06:01:52.384845 | orchestrator | Thursday 16 April 2026 06:01:45 +0000 (0:00:01.536) 0:00:34.455 ******** 2026-04-16 06:01:52.384861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:52.384877 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:52.384937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-04-16 06:01:52.384953 | orchestrator | 2026-04-16 06:01:52.384966 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] 
***************
2026-04-16 06:01:52.384979 | orchestrator | Thursday 16 April 2026 06:01:47 +0000 (0:00:02.110) 0:00:36.565 ********
2026-04-16 06:01:52.385013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-16 06:01:52.385030 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-16 06:01:52.385044 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-04-16 06:01:52.385059 | orchestrator |
2026-04-16 06:01:52.385072 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-04-16 06:01:52.385084 | orchestrator | Thursday 16 April 2026 06:01:49 +0000 (0:00:01.430) 0:00:37.995 ********
2026-04-16 06:01:52.385097 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:01:52.385112 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:01:52.385126 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:01:52.385141 | orchestrator |
2026-04-16 06:01:52.385156 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-04-16 06:01:52.385171 | orchestrator | Thursday 16 April 2026 06:01:50 +0000 (0:00:01.271) 0:00:39.267 ********
2026-04-16 06:01:52.385185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:01:52.385209 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:01:52.385223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:01:52.385237 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:01:52.385252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:01:52.385266 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:01:52.385279 | orchestrator |
2026-04-16 06:01:52.385298 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-04-16 06:01:52.385312 | orchestrator | Thursday 16 April 2026 06:01:51 +0000 (0:00:00.695) 0:00:39.962 ********
2026-04-16 06:01:52.385337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:02:20.777157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:02:20.777297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-04-16 06:02:20.777315 | orchestrator |
2026-04-16 06:02:20.777329 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-04-16 06:02:20.777342 | orchestrator | Thursday 16 April 2026 06:01:52 +0000 (0:00:01.011) 0:00:40.974 ********
2026-04-16 06:02:20.777354 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:02:20.777366 | orchestrator |
2026-04-16 06:02:20.777378 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-04-16 06:02:20.777389 | orchestrator | Thursday 16 April 2026 06:01:54 +0000 (0:00:02.068) 0:00:43.042 ********
2026-04-16 06:02:20.777401 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:02:20.777412 | orchestrator |
2026-04-16 06:02:20.777423 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-04-16 06:02:20.777434 | orchestrator | Thursday 16 April 2026 06:01:56 +0000 (0:00:02.272) 0:00:45.315 ********
2026-04-16 06:02:20.777445 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:02:20.777456 | orchestrator |
2026-04-16 06:02:20.777467 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-16 06:02:20.777478 | orchestrator | Thursday 16 April 2026 06:02:10 +0000 (0:00:13.463) 0:00:58.778 ********
2026-04-16 06:02:20.777489 | orchestrator |
2026-04-16 06:02:20.777519 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-16 06:02:20.777530 | orchestrator | Thursday 16 April 2026 06:02:10 +0000 (0:00:00.068) 0:00:58.846 ********
2026-04-16 06:02:20.777553 | orchestrator |
2026-04-16 06:02:20.777576 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-04-16 06:02:20.777598 | orchestrator | Thursday 16 April 2026 06:02:10 +0000 (0:00:00.068) 0:00:58.914 ********
2026-04-16 06:02:20.777609 | orchestrator |
2026-04-16 06:02:20.777620 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-04-16 06:02:20.777645 | orchestrator | Thursday 16 April 2026 06:02:10 +0000 (0:00:00.071) 0:00:58.986 ********
2026-04-16 06:02:20.777657 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:02:20.777668 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:02:20.777679 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:02:20.777690 | orchestrator |
2026-04-16 06:02:20.777735 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:02:20.777751 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:02:20.777764 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:02:20.777775 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:02:20.777786 | orchestrator |
2026-04-16 06:02:20.777797 | orchestrator |
2026-04-16 06:02:20.777808 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:02:20.777829 | orchestrator | Thursday 16 April 2026 06:02:20 +0000 (0:00:10.088) 0:01:09.074 ********
2026-04-16 06:02:20.777840 | orchestrator | ===============================================================================
2026-04-16 06:02:20.777851 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.46s
2026-04-16 06:02:20.777879 | orchestrator | placement : Restart placement-api container ---------------------------- 10.09s
2026-04-16 06:02:20.777891 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.95s
2026-04-16 06:02:20.777903 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.14s
2026-04-16 06:02:20.777913 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.07s
2026-04-16 06:02:20.777924 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.05s
2026-04-16 06:02:20.777935 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.93s
2026-04-16 06:02:20.777946 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.26s
2026-04-16 06:02:20.777956 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.27s
2026-04-16 06:02:20.777967 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.11s
2026-04-16 06:02:20.777978 | orchestrator | placement : Creating placement databases -------------------------------- 2.07s
2026-04-16 06:02:20.777989 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.61s
2026-04-16 06:02:20.778000 | orchestrator | placement : Copying over config.json files for services ----------------- 1.54s
2026-04-16 06:02:20.778011 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.43s
2026-04-16 06:02:20.778079 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.27s
2026-04-16 06:02:20.778091 | orchestrator | placement : Check placement containers ---------------------------------- 1.01s
2026-04-16 06:02:20.778101 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.98s
2026-04-16 06:02:20.778112 | orchestrator | placement : Copying over existing policy file --------------------------- 0.70s
2026-04-16 06:02:20.778132 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.69s
2026-04-16 06:02:20.778143 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.55s
2026-04-16 06:02:22.967085 | orchestrator | 2026-04-16 06:02:22 | INFO  | Task 7b79e919-15b8-48f1-8883-2335f942a207 (neutron) was prepared for execution.
2026-04-16 06:02:22.967189 | orchestrator | 2026-04-16 06:02:22 | INFO  | It takes a moment until task 7b79e919-15b8-48f1-8883-2335f942a207 (neutron) has been started and output is visible here.
2026-04-16 06:03:08.949032 | orchestrator |
2026-04-16 06:03:08.949187 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:03:08.949236 | orchestrator |
2026-04-16 06:03:08.949258 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:03:08.949279 | orchestrator | Thursday 16 April 2026 06:02:26 +0000 (0:00:00.245) 0:00:00.245 ********
2026-04-16 06:03:08.949298 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:03:08.949323 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:03:08.949349 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:03:08.949368 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:03:08.949386 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:03:08.949404 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:03:08.949424 | orchestrator |
2026-04-16 06:03:08.949442 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:03:08.949462 | orchestrator | Thursday 16 April 2026 06:02:27 +0000 (0:00:00.644) 0:00:00.889 ********
2026-04-16 06:03:08.949482 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-04-16 06:03:08.949500 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-04-16 06:03:08.949519 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-04-16 06:03:08.949539 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-04-16 06:03:08.949595 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-04-16 06:03:08.949617 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-04-16 06:03:08.949636 | orchestrator |
2026-04-16 06:03:08.949656 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-04-16 06:03:08.949675 | orchestrator |
2026-04-16 06:03:08.949772 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-16 06:03:08.949812 | orchestrator | Thursday 16 April 2026 06:02:28 +0000 (0:00:00.571) 0:00:01.461 ********
2026-04-16 06:03:08.949834 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 06:03:08.949855 | orchestrator |
2026-04-16 06:03:08.949875 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-04-16 06:03:08.949895 | orchestrator | Thursday 16 April 2026 06:02:29 +0000 (0:00:01.135) 0:00:02.596 ********
2026-04-16 06:03:08.949915 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:03:08.949935 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:03:08.949953 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:03:08.949972 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:03:08.949991 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:03:08.950010 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:03:08.950104 | orchestrator |
2026-04-16 06:03:08.950125 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-04-16 06:03:08.950145 | orchestrator | Thursday 16 April 2026 06:02:30 +0000 (0:00:01.211) 0:00:03.807 ********
2026-04-16 06:03:08.950177 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:03:08.950197 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:03:08.950217 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:03:08.950238 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:03:08.950257 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:03:08.950276 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:03:08.950294 | orchestrator |
2026-04-16 06:03:08.950314 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-04-16 06:03:08.950335 | orchestrator | Thursday 16 April 2026 06:02:31 +0000 (0:00:01.050) 0:00:04.858 ********
2026-04-16 06:03:08.950356 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 06:03:08.950376 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950395 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950414 | orchestrator | }
2026-04-16 06:03:08.950433 | orchestrator | ok: [testbed-node-1] => {
2026-04-16 06:03:08.950454 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950473 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950493 | orchestrator | }
2026-04-16 06:03:08.950506 | orchestrator | ok: [testbed-node-2] => {
2026-04-16 06:03:08.950517 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950528 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950539 | orchestrator | }
2026-04-16 06:03:08.950550 | orchestrator | ok: [testbed-node-3] => {
2026-04-16 06:03:08.950560 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950571 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950581 | orchestrator | }
2026-04-16 06:03:08.950592 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 06:03:08.950604 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950615 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950755 | orchestrator | }
2026-04-16 06:03:08.950770 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 06:03:08.950781 | orchestrator |  "changed": false,
2026-04-16 06:03:08.950792 | orchestrator |  "msg": "All assertions passed"
2026-04-16 06:03:08.950803 | orchestrator | }
2026-04-16 06:03:08.950825 | orchestrator |
2026-04-16 06:03:08.950837 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-04-16 06:03:08.950848 | orchestrator | Thursday 16 April 2026 06:02:32 +0000 (0:00:00.749) 0:00:05.608 ********
2026-04-16 06:03:08.950860 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:08.950885 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:08.950896 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:08.950907 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:08.950918 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:08.950929 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:08.950940 | orchestrator |
2026-04-16 06:03:08.950951 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-04-16 06:03:08.950962 | orchestrator | Thursday 16 April 2026 06:02:32 +0000 (0:00:00.557) 0:00:06.166 ********
2026-04-16 06:03:08.950973 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-04-16 06:03:08.950984 | orchestrator |
2026-04-16 06:03:08.951001 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-04-16 06:03:08.951020 | orchestrator | Thursday 16 April 2026 06:02:36 +0000 (0:00:03.512) 0:00:09.678 ********
2026-04-16 06:03:08.951038 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-04-16 06:03:08.951058 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-04-16 06:03:08.951075 | orchestrator |
2026-04-16 06:03:08.951120 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-04-16 06:03:08.951140 | orchestrator | Thursday 16 April 2026 06:02:42 +0000 (0:00:06.562) 0:00:16.241 ********
2026-04-16 06:03:08.951159 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:03:08.951177 | orchestrator |
2026-04-16 06:03:08.951194 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-04-16 06:03:08.951212 | orchestrator | Thursday 16 April 2026 06:02:46 +0000 (0:00:03.224) 0:00:19.465 ********
2026-04-16 06:03:08.951230 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:03:08.951248 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-04-16 06:03:08.951268 | orchestrator |
2026-04-16 06:03:08.951288 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-04-16 06:03:08.951306 | orchestrator | Thursday 16 April 2026 06:02:50 +0000 (0:00:03.816) 0:00:23.281 ********
2026-04-16 06:03:08.951325 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 06:03:08.951344 | orchestrator |
2026-04-16 06:03:08.951363 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-04-16 06:03:08.951382 | orchestrator | Thursday 16 April 2026 06:02:53 +0000 (0:00:03.125) 0:00:26.406 ********
2026-04-16 06:03:08.951394 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-04-16 06:03:08.951405 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-04-16 06:03:08.951416 | orchestrator |
2026-04-16 06:03:08.951427 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-16 06:03:08.951437 | orchestrator | Thursday 16 April 2026 06:03:00 +0000 (0:00:07.506) 0:00:33.913 ********
2026-04-16 06:03:08.951448 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:08.951459 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:08.951481 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:08.951492 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:08.951503 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:08.951513 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:08.951524 | orchestrator |
2026-04-16 06:03:08.951536 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-04-16 06:03:08.951547 | orchestrator | Thursday 16 April 2026 06:03:01 +0000 (0:00:00.737) 0:00:34.651 ********
2026-04-16 06:03:08.951558 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:08.951568 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:08.951579 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:08.951590 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:08.951600 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:08.951611 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:08.951622 | orchestrator |
2026-04-16 06:03:08.951647 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-04-16 06:03:08.951658 | orchestrator | Thursday 16 April 2026 06:03:03 +0000 (0:00:01.899) 0:00:36.551 ********
2026-04-16 06:03:08.951669 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:03:08.951738 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:03:08.951752 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:03:08.951763 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:03:08.951774 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:03:08.951785 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:03:08.951796 | orchestrator |
2026-04-16 06:03:08.951807 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-16 06:03:08.951818 | orchestrator | Thursday 16 April 2026 06:03:04 +0000 (0:00:01.086) 0:00:37.638 ********
2026-04-16 06:03:08.951829 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:08.951840 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:08.951851 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:08.951862 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:08.951872 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:08.951883 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:08.951894 | orchestrator |
2026-04-16 06:03:08.951905 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-04-16 06:03:08.951916 | orchestrator | Thursday 16 April 2026 06:03:06 +0000 (0:00:02.125)
0:00:39.763 ******** 2026-04-16 06:03:08.951932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:08.951963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:14.189083 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:14.189216 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 06:03:14.189234 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:14.189246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:14.189259 | orchestrator |
2026-04-16 06:03:14.189272 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-04-16 06:03:14.189284 | orchestrator | Thursday 16 April 2026 06:03:08 +0000 (0:00:02.442) 0:00:42.205 ********
2026-04-16 06:03:14.189296 | orchestrator | [WARNING]: Skipped
2026-04-16 06:03:14.189307 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-04-16 06:03:14.189319 | orchestrator | due to this access issue:
2026-04-16 06:03:14.189331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-04-16 06:03:14.189342 | orchestrator | a directory
2026-04-16 06:03:14.189353 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:03:14.189364 | orchestrator |
2026-04-16 06:03:14.189375 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-16 06:03:14.189386 | orchestrator | Thursday 16 April 2026 06:03:09 +0000 (0:00:00.783) 0:00:42.989 ********
2026-04-16 06:03:14.189398 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 06:03:14.189410 | orchestrator |
2026-04-16 06:03:14.189421 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-04-16 06:03:14.189449 | orchestrator | Thursday 16 April 2026 06:03:10 +0000 (0:00:01.188) 0:00:44.177 ********
2026-04-16 06:03:14.189468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:14.189489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130',
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:14.189501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:14.189513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:14.189533 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:18.076496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:18.076600 | orchestrator |
2026-04-16 06:03:18.076618 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-04-16 06:03:18.076630 | orchestrator | Thursday 16 April 2026 06:03:14 +0000 (0:00:03.266) 0:00:47.444 ********
2026-04-16 06:03:18.076645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:18.076658 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:18.076671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:18.076735 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:18.076747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:18.076759 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:18.076814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:18.076827 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:18.076845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:18.076857 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:18.076868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:18.076879 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:18.076890 | orchestrator |
2026-04-16 06:03:18.076902 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-04-16 06:03:18.076913 | orchestrator | Thursday 16 April 2026 06:03:15 +0000 (0:00:01.553) 0:00:48.997 ********
2026-04-16 06:03:18.076924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:18.076936 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:18.076955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:22.445074 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:22.445202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:22.445223 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:22.445238 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:22.445255 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:22.445277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:22.445295 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:22.445316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:22.445362 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:22.445384 | orchestrator |
2026-04-16 06:03:22.445404 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-04-16 06:03:22.445426 | orchestrator | Thursday 16 April 2026 06:03:18 +0000 (0:00:02.337) 0:00:51.335 ********
2026-04-16 06:03:22.445447 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:22.445466 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:22.445484 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:22.445495 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:22.445505 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:22.445516 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:22.445526 | orchestrator |
2026-04-16 06:03:22.445538 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-04-16 06:03:22.445549 | orchestrator | Thursday 16 April 2026 06:03:19 +0000 (0:00:00.112) 0:00:53.258 ********
2026-04-16 06:03:22.445559 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:22.445570 | orchestrator |
2026-04-16 06:03:22.445581 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-04-16 06:03:22.445611 | orchestrator | Thursday 16 April 2026 06:03:20 +0000 (0:00:00.112) 0:00:53.370 ********
2026-04-16 06:03:22.445626 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:22.445645 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:22.445663 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:22.445724 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:22.445743 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:22.445764 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:22.445783 | orchestrator |
2026-04-16 06:03:22.445802 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-04-16 06:03:22.445816 | orchestrator | Thursday 16 April 2026 06:03:20 +0000 (0:00:00.516) 0:00:53.886 ********
2026-04-16 06:03:22.445837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:22.445850 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:22.445861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:22.445884 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:22.445895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:22.445907 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:22.445919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:22.445930 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:22.445957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:29.346820 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:03:29.346908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:29.346920 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:29.346928 | orchestrator |
2026-04-16 06:03:29.346935 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-04-16 06:03:29.346943 | orchestrator | Thursday 16 April 2026 06:03:22 +0000 (0:00:01.815) 0:00:55.702 ********
2026-04-16 06:03:29.346950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:29.346977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:29.346984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:29.347016 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:29.347024 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:29.347067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:29.347075 | orchestrator |
2026-04-16 06:03:29.347082 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2026-04-16 06:03:29.347088 | orchestrator | Thursday 16 April 2026 06:03:25 +0000 (0:00:02.647) 0:00:58.350 ********
2026-04-16 06:03:29.347095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:29.347101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:29.347120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:33.174547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:33.174753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:33.174786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:33.174802 | orchestrator |
2026-04-16 06:03:33.174815 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-04-16 06:03:33.174828 | orchestrator | Thursday 16 April 2026 06:03:29 +0000 (0:00:04.255) 0:01:02.605 ********
2026-04-16 06:03:33.174856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:33.174868 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:03:33.174901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:33.174923 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:03:33.174935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:03:33.174947 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:03:33.174958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:33.174969 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:03:33.174981 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:03:33.174992 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:03:33.175009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value':
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:33.175021 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:33.175039 | orchestrator | 2026-04-16 06:03:33.175050 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-16 06:03:33.175061 | orchestrator | Thursday 16 April 2026 06:03:31 +0000 (0:00:01.675) 0:01:04.280 ******** 2026-04-16 06:03:33.175075 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:33.175088 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:33.175100 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:33.175113 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:03:33.175125 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:03:33.175145 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:03:49.866637 | orchestrator | 2026-04-16 06:03:49.866865 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-16 06:03:49.866884 | orchestrator | Thursday 16 April 2026 06:03:33 +0000 (0:00:02.150) 0:01:06.431 ******** 2026-04-16 06:03:49.866901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:49.866918 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.866930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:49.866942 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.866954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:49.866965 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.866996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:49.867059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:49.867073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-04-16 06:03:49.867084 | orchestrator | 2026-04-16 06:03:49.867096 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-04-16 06:03:49.867107 | orchestrator | Thursday 16 April 2026 06:03:36 +0000 (0:00:02.863) 0:01:09.294 ******** 2026-04-16 06:03:49.867118 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867129 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867140 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867150 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867173 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867185 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.867195 | orchestrator | 2026-04-16 06:03:49.867206 | orchestrator | 
TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-16 06:03:49.867217 | orchestrator | Thursday 16 April 2026 06:03:38 +0000 (0:00:02.114) 0:01:11.409 ******** 2026-04-16 06:03:49.867228 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867239 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867249 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867260 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.867271 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867281 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867292 | orchestrator | 2026-04-16 06:03:49.867302 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-16 06:03:49.867313 | orchestrator | Thursday 16 April 2026 06:03:40 +0000 (0:00:01.908) 0:01:13.317 ******** 2026-04-16 06:03:49.867325 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867336 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867357 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867368 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867379 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.867390 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867406 | orchestrator | 2026-04-16 06:03:49.867417 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-16 06:03:49.867428 | orchestrator | Thursday 16 April 2026 06:03:42 +0000 (0:00:02.006) 0:01:15.323 ******** 2026-04-16 06:03:49.867439 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867449 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867460 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867471 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867481 | orchestrator | skipping: [testbed-node-4] 2026-04-16 
06:03:49.867492 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867502 | orchestrator | 2026-04-16 06:03:49.867513 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-16 06:03:49.867524 | orchestrator | Thursday 16 April 2026 06:03:44 +0000 (0:00:02.379) 0:01:17.703 ******** 2026-04-16 06:03:49.867535 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867545 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867556 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867566 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867577 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.867588 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867598 | orchestrator | 2026-04-16 06:03:49.867609 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-16 06:03:49.867620 | orchestrator | Thursday 16 April 2026 06:03:46 +0000 (0:00:01.808) 0:01:19.511 ******** 2026-04-16 06:03:49.867631 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867648 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:49.867659 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867691 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:49.867702 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:49.867713 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:49.867723 | orchestrator | 2026-04-16 06:03:49.867734 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-16 06:03:49.867745 | orchestrator | Thursday 16 April 2026 06:03:48 +0000 (0:00:01.866) 0:01:21.378 ******** 2026-04-16 06:03:49.867756 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:49.867767 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:49.867778 
| orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:49.867789 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:49.867800 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:49.867818 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:53.308998 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:53.309114 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:53.309131 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:53.309142 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:53.309153 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 06:03:53.309164 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:53.309175 | orchestrator | 2026-04-16 06:03:53.309187 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-16 06:03:53.309199 | orchestrator | Thursday 16 April 2026 06:03:49 +0000 (0:00:01.743) 0:01:23.121 ******** 2026-04-16 06:03:53.309214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:03:53.309257 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:03:53.309270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:03:53.309281 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:53.309308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:03:53.309320 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:53.309350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:53.309362 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:03:53.309373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-16 06:03:53.309393 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:03:53.309404 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:03:53.309415 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:03:53.309426 | orchestrator | 2026-04-16 06:03:53.309437 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-16 06:03:53.309448 | orchestrator | Thursday 16 April 2026 06:03:51 +0000 (0:00:01.796) 0:01:24.917 ******** 2026-04-16 06:03:53.309459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:03:53.309471 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:03:53.309488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:03:53.309499 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:03:53.309519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-04-16 06:04:15.055610 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:04:15.055762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:04:15.055780 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:04:15.055790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:04:15.055798 | orchestrator | skipping: 
[testbed-node-4] 2026-04-16 06:04:15.055807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 06:04:15.055815 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:04:15.055824 | orchestrator | 2026-04-16 06:04:15.055832 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-16 06:04:15.055842 | orchestrator | Thursday 16 April 2026 06:03:53 +0000 (0:00:01.650) 0:01:26.568 ******** 2026-04-16 06:04:15.055850 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:04:15.055858 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:04:15.055866 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:04:15.055874 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:04:15.055895 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:04:15.055904 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:04:15.055912 | orchestrator | 2026-04-16 06:04:15.055920 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-16 06:04:15.055928 | orchestrator | Thursday 16 April 2026 06:03:55 +0000 (0:00:02.142) 0:01:28.710 ******** 2026-04-16 06:04:15.055935 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:04:15.055943 | orchestrator | 
skipping: [testbed-node-1] 2026-04-16 06:04:15.055951 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:04:15.055959 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:04:15.055967 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:04:15.055975 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:04:15.055999 | orchestrator | 2026-04-16 06:04:15.056008 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-16 06:04:15.056016 | orchestrator | Thursday 16 April 2026 06:03:58 +0000 (0:00:03.376) 0:01:32.087 ******** 2026-04-16 06:04:15.056024 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:04:15.056032 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:04:15.056039 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:04:15.056047 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:04:15.056055 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:04:15.056063 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:04:15.056071 | orchestrator | 2026-04-16 06:04:15.056078 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-16 06:04:15.056086 | orchestrator | Thursday 16 April 2026 06:04:00 +0000 (0:00:01.895) 0:01:33.983 ******** 2026-04-16 06:04:15.056094 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:04:15.056102 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:04:15.056110 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:04:15.056117 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:04:15.056125 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:04:15.056138 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:04:15.056152 | orchestrator | 2026-04-16 06:04:15.056165 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-16 06:04:15.056205 | orchestrator | Thursday 16 April 2026 06:04:02 +0000 (0:00:02.032) 
0:01:36.015 ********
2026-04-16 06:04:15.056234 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056250 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056263 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056275 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056288 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056301 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056314 | orchestrator |
2026-04-16 06:04:15.056327 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-04-16 06:04:15.056341 | orchestrator | Thursday 16 April 2026 06:04:04 +0000 (0:00:01.913) 0:01:37.928 ********
2026-04-16 06:04:15.056355 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056368 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056382 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056396 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056407 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056420 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056434 | orchestrator |
2026-04-16 06:04:15.056449 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-04-16 06:04:15.056463 | orchestrator | Thursday 16 April 2026 06:04:06 +0000 (0:00:01.581) 0:01:39.509 ********
2026-04-16 06:04:15.056477 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056492 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056507 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056522 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056536 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056545 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056553 | orchestrator |
2026-04-16 06:04:15.056561 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-04-16 06:04:15.056569 | orchestrator | Thursday 16 April 2026 06:04:07 +0000 (0:00:01.618) 0:01:41.128 ********
2026-04-16 06:04:15.056577 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056584 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056592 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056600 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056607 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056615 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056623 | orchestrator |
2026-04-16 06:04:15.056631 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-04-16 06:04:15.056639 | orchestrator | Thursday 16 April 2026 06:04:09 +0000 (0:00:01.583) 0:01:42.711 ********
2026-04-16 06:04:15.056656 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056682 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056691 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056699 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056707 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056715 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056722 | orchestrator |
2026-04-16 06:04:15.056730 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-04-16 06:04:15.056738 | orchestrator | Thursday 16 April 2026 06:04:11 +0000 (0:00:01.766) 0:01:44.478 ********
2026-04-16 06:04:15.056746 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056755 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:15.056763 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056771 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056779 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056787 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:15.056795 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056803 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:15.056811 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056819 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:15.056833 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-04-16 06:04:15.056842 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:15.056850 | orchestrator |
2026-04-16 06:04:15.056858 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-04-16 06:04:15.056866 | orchestrator | Thursday 16 April 2026 06:04:12 +0000 (0:00:01.620) 0:01:46.098 ********
2026-04-16 06:04:15.056876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:15.056885 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:04:15.056903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:17.305130 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:04:17.305229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:17.305245 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:04:17.305256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:04:17.305266 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:04:17.305292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:04:17.305302 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:04:17.305311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:04:17.305320 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:04:17.305329 | orchestrator |
2026-04-16 06:04:17.305339 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-04-16 06:04:17.305349 | orchestrator | Thursday 16 April 2026 06:04:15 +0000 (0:00:02.210) 0:01:48.309 ********
2026-04-16 06:04:17.305374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:17.305403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:17.305417 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:04:17.305427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-04-16 06:04:17.305437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:04:17.305457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 06:06:34.847328 | orchestrator |
2026-04-16 06:06:34.847430 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-16 06:06:34.847447 | orchestrator | Thursday 16 April 2026 06:04:17 +0000 (0:00:02.255) 0:01:50.565 ********
2026-04-16 06:06:34.847459 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:06:34.847471 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:06:34.847482 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:06:34.847492 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:06:34.847503 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:06:34.847514 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:06:34.847524 | orchestrator |
2026-04-16 06:06:34.847536 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-04-16 06:06:34.847546 | orchestrator | Thursday 16 April 2026 06:04:17 +0000 (0:00:00.602) 0:01:51.167 ********
2026-04-16 06:06:34.847557 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:06:34.847568 | orchestrator |
2026-04-16 06:06:34.847579 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-04-16 06:06:34.847590 | orchestrator | Thursday 16 April 2026 06:04:19 +0000 (0:00:02.033) 0:01:53.200 ********
2026-04-16 06:06:34.847601 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:06:34.847611 | orchestrator |
2026-04-16 06:06:34.847622 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-04-16
06:06:34.847633 | orchestrator | Thursday 16 April 2026 06:04:22 +0000 (0:00:02.210) 0:01:55.410 ********
2026-04-16 06:06:34.847707 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:06:34.847722 | orchestrator |
2026-04-16 06:06:34.847734 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847745 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:41.978) 0:02:37.389 ********
2026-04-16 06:06:34.847756 | orchestrator |
2026-04-16 06:06:34.847767 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847778 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.075) 0:02:37.464 ********
2026-04-16 06:06:34.847789 | orchestrator |
2026-04-16 06:06:34.847800 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847811 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.067) 0:02:37.532 ********
2026-04-16 06:06:34.847821 | orchestrator |
2026-04-16 06:06:34.847832 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847857 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.064) 0:02:37.597 ********
2026-04-16 06:06:34.847868 | orchestrator |
2026-04-16 06:06:34.847879 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847895 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.068) 0:02:37.665 ********
2026-04-16 06:06:34.847916 | orchestrator |
2026-04-16 06:06:34.847936 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 06:06:34.847956 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.068) 0:02:37.734 ********
2026-04-16 06:06:34.847975 | orchestrator |
2026-04-16 06:06:34.848020 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-16 06:06:34.848042 | orchestrator | Thursday 16 April 2026 06:05:04 +0000 (0:00:00.067) 0:02:37.802 ********
2026-04-16 06:06:34.848058 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:06:34.848077 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:06:34.848096 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:06:34.848115 | orchestrator |
2026-04-16 06:06:34.848134 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-16 06:06:34.848153 | orchestrator | Thursday 16 April 2026 06:05:26 +0000 (0:00:21.693) 0:02:59.495 ********
2026-04-16 06:06:34.848172 | orchestrator | changed: [testbed-node-3]
2026-04-16 06:06:34.848193 | orchestrator | changed: [testbed-node-5]
2026-04-16 06:06:34.848213 | orchestrator | changed: [testbed-node-4]
2026-04-16 06:06:34.848232 | orchestrator |
2026-04-16 06:06:34.848253 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:06:34.848274 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 06:06:34.848295 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-16 06:06:34.848314 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-04-16 06:06:34.848333 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 06:06:34.848353 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 06:06:34.848371 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-04-16 06:06:34.848390 | orchestrator |
2026-04-16 06:06:34.848409 | orchestrator |
2026-04-16 06:06:34.848428 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:06:34.848448 | orchestrator | Thursday 16 April 2026 06:06:34 +0000 (0:01:08.190) 0:04:07.685 ********
2026-04-16 06:06:34.848466 | orchestrator | ===============================================================================
2026-04-16 06:06:34.848485 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 68.19s
2026-04-16 06:06:34.848505 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.98s
2026-04-16 06:06:34.848523 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.69s
2026-04-16 06:06:34.848563 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.51s
2026-04-16 06:06:34.848584 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.56s
2026-04-16 06:06:34.848602 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 4.26s
2026-04-16 06:06:34.848620 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.82s
2026-04-16 06:06:34.848639 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s
2026-04-16 06:06:34.848680 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.38s
2026-04-16 06:06:34.848700 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.27s
2026-04-16 06:06:34.848718 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.22s
2026-04-16 06:06:34.848736 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.13s
2026-04-16 06:06:34.848755 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 2.86s
2026-04-16 06:06:34.848773 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.65s
2026-04-16 06:06:34.848792 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.44s
2026-04-16 06:06:34.848823 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 2.38s
2026-04-16 06:06:34.848843 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.34s
2026-04-16 06:06:34.848861 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.26s
2026-04-16 06:06:34.848881 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.21s
2026-04-16 06:06:34.848899 | orchestrator | neutron : Creating Neutron database user and setting permissions -------- 2.21s
2026-04-16 06:06:37.346002 | orchestrator | 2026-04-16 06:06:37 | INFO  | Task 6581bfce-82b9-49d1-bd56-dcaf573e89aa (nova) was prepared for execution.
2026-04-16 06:06:37.346202 | orchestrator | 2026-04-16 06:06:37 | INFO  | It takes a moment until task 6581bfce-82b9-49d1-bd56-dcaf573e89aa (nova) has been started and output is visible here.
2026-04-16 06:08:32.194780 | orchestrator |
2026-04-16 06:08:32.194902 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:08:32.194912 | orchestrator |
2026-04-16 06:08:32.194917 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-16 06:08:32.194924 | orchestrator | Thursday 16 April 2026 06:06:41 +0000 (0:00:00.245) 0:00:00.245 ********
2026-04-16 06:08:32.194929 | orchestrator | changed: [testbed-manager]
2026-04-16 06:08:32.194936 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.194941 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:08:32.194946 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:08:32.194952 | orchestrator | changed: [testbed-node-3]
2026-04-16 06:08:32.194957 | orchestrator | changed: [testbed-node-4]
2026-04-16 06:08:32.194962 | orchestrator | changed: [testbed-node-5]
2026-04-16 06:08:32.194967 | orchestrator |
2026-04-16 06:08:32.194972 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:08:32.194977 | orchestrator | Thursday 16 April 2026 06:06:41 +0000 (0:00:00.677) 0:00:00.923 ********
2026-04-16 06:08:32.194982 | orchestrator | changed: [testbed-manager]
2026-04-16 06:08:32.194986 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.194991 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:08:32.194996 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:08:32.195001 | orchestrator | changed: [testbed-node-3]
2026-04-16 06:08:32.195006 | orchestrator | changed: [testbed-node-4]
2026-04-16 06:08:32.195011 | orchestrator | changed: [testbed-node-5]
2026-04-16 06:08:32.195016 | orchestrator |
2026-04-16 06:08:32.195021 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:08:32.195026 | orchestrator | Thursday 16 April 2026 06:06:42 +0000 (0:00:00.539) 0:00:01.590 ********
2026-04-16 06:08:32.195031 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-16 06:08:32.195037 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-16 06:08:32.195042 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-16 06:08:32.195047 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-16 06:08:32.195051 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-16 06:08:32.195056 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-16 06:08:32.195061 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-16 06:08:32.195066 | orchestrator |
2026-04-16 06:08:32.195071 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-16 06:08:32.195076 | orchestrator |
2026-04-16 06:08:32.195080 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 06:08:32.195085 | orchestrator | Thursday 16 April 2026 06:06:43 +0000 (0:00:00.539) 0:00:02.129 ********
2026-04-16 06:08:32.195090 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:08:32.195095 | orchestrator |
2026-04-16 06:08:32.195100 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-16 06:08:32.195128 | orchestrator | Thursday 16 April 2026 06:06:43 +0000 (0:00:00.653) 0:00:02.783 ********
2026-04-16 06:08:32.195134 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-04-16 06:08:32.195140 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-04-16 06:08:32.195145 | orchestrator |
2026-04-16 06:08:32.195150 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-16 06:08:32.195155 | orchestrator | Thursday 16 April 2026 06:06:47 +0000 (0:00:04.147) 0:00:06.931 ********
2026-04-16 06:08:32.195160 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-16 06:08:32.195165 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-16 06:08:32.195170 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195174 | orchestrator |
2026-04-16 06:08:32.195179 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-16 06:08:32.195184 | orchestrator | Thursday 16 April 2026 06:06:51 +0000 (0:00:04.117) 0:00:11.049 ********
2026-04-16 06:08:32.195189 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195194 | orchestrator |
2026-04-16 06:08:32.195199 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-16 06:08:32.195204 | orchestrator | Thursday 16 April 2026 06:06:52 +0000 (0:00:00.597) 0:00:11.646 ********
2026-04-16 06:08:32.195209 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195213 | orchestrator |
2026-04-16 06:08:32.195218 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-16 06:08:32.195223 | orchestrator | Thursday 16 April 2026 06:06:53 +0000 (0:00:01.166) 0:00:12.813 ********
2026-04-16 06:08:32.195228 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195233 | orchestrator |
2026-04-16 06:08:32.195238 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 06:08:32.195242 | orchestrator | Thursday 16 April 2026 06:06:56 +0000 (0:00:02.532) 0:00:15.345 ********
2026-04-16 06:08:32.195247 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195252 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195257 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195262 | orchestrator |
2026-04-16 06:08:32.195267 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-16 06:08:32.195273 | orchestrator | Thursday 16 April 2026 06:06:56 +0000 (0:00:00.297) 0:00:15.643 ********
2026-04-16 06:08:32.195279 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:08:32.195284 | orchestrator |
2026-04-16 06:08:32.195290 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-16 06:08:32.195296 | orchestrator | Thursday 16 April 2026 06:07:28 +0000 (0:00:31.760) 0:00:47.404 ********
2026-04-16 06:08:32.195301 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195307 | orchestrator |
2026-04-16 06:08:32.195313 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-16 06:08:32.195319 | orchestrator | Thursday 16 April 2026 06:07:42 +0000 (0:00:14.209) 0:01:01.613 ********
2026-04-16 06:08:32.195325 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:08:32.195330 | orchestrator |
2026-04-16 06:08:32.195336 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-16 06:08:32.195355 | orchestrator | Thursday 16 April 2026 06:07:53 +0000 (0:00:11.062) 0:01:12.676 ********
2026-04-16 06:08:32.195375 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:08:32.195381 | orchestrator |
2026-04-16 06:08:32.195387 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-16 06:08:32.195392 | orchestrator | Thursday 16 April 2026 06:07:54 +0000 (0:00:00.638) 0:01:13.314 ********
2026-04-16 06:08:32.195398 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195404 | orchestrator |
2026-04-16 06:08:32.195410 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 06:08:32.195415 | orchestrator | Thursday 16 April 2026 06:07:54 +0000 (0:00:00.440) 0:01:13.755 ********
2026-04-16 06:08:32.195422 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:08:32.195433 | orchestrator |
2026-04-16 06:08:32.195439 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-16 06:08:32.195444 | orchestrator | Thursday 16 April 2026 06:07:55 +0000 (0:00:00.633) 0:01:14.389 ********
2026-04-16 06:08:32.195450 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:08:32.195456 | orchestrator |
2026-04-16 06:08:32.195462 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-16 06:08:32.195468 | orchestrator | Thursday 16 April 2026 06:08:13 +0000 (0:00:17.723) 0:01:32.113 ********
2026-04-16 06:08:32.195473 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195479 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195484 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195489 | orchestrator |
2026-04-16 06:08:32.195493 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-16 06:08:32.195498 | orchestrator |
2026-04-16 06:08:32.195503 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 06:08:32.195508 | orchestrator | Thursday 16 April 2026 06:08:13 +0000 (0:00:00.320) 0:01:32.433 ********
2026-04-16 06:08:32.195513 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:08:32.195517 | orchestrator |
2026-04-16 06:08:32.195522 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-16 06:08:32.195527 | orchestrator | Thursday 16 April 2026 06:08:14 +0000 (0:00:00.775) 0:01:33.208 ********
2026-04-16 06:08:32.195532 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195537 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195541 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195546 | orchestrator |
2026-04-16 06:08:32.195551 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-16 06:08:32.195556 | orchestrator | Thursday 16 April 2026 06:08:16 +0000 (0:00:02.010) 0:01:35.219 ********
2026-04-16 06:08:32.195560 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195565 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195570 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195575 | orchestrator |
2026-04-16 06:08:32.195579 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-16 06:08:32.195584 | orchestrator | Thursday 16 April 2026 06:08:18 +0000 (0:00:02.128) 0:01:37.347 ********
2026-04-16 06:08:32.195589 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195594 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195598 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195603 | orchestrator |
2026-04-16 06:08:32.195608 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-16 06:08:32.195612 | orchestrator | Thursday 16 April 2026 06:08:18 +0000 (0:00:00.487) 0:01:37.835 ********
2026-04-16 06:08:32.195617 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 06:08:32.195622 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195627 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 06:08:32.195645 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195652 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 06:08:32.195657 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-16 06:08:32.195662 | orchestrator |
2026-04-16 06:08:32.195666 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-16 06:08:32.195671 | orchestrator | Thursday 16 April 2026 06:08:26 +0000 (0:00:07.918) 0:01:45.754 ********
2026-04-16 06:08:32.195676 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195681 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195685 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195690 | orchestrator |
2026-04-16 06:08:32.195695 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-16 06:08:32.195700 | orchestrator | Thursday 16 April 2026 06:08:27 +0000 (0:00:00.340) 0:01:46.094 ********
2026-04-16 06:08:32.195705 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-16 06:08:32.195714 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:08:32.195719 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 06:08:32.195723 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195728 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 06:08:32.195733 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195738 | orchestrator |
2026-04-16 06:08:32.195742 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-16 06:08:32.195747 | orchestrator | Thursday 16 April 2026 06:08:28 +0000 (0:00:01.079) 0:01:47.173 ********
2026-04-16 06:08:32.195752 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195757 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195761 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195766 | orchestrator |
2026-04-16 06:08:32.195771 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-16 06:08:32.195776 | orchestrator | Thursday 16 April 2026 06:08:28 +0000 (0:00:00.472) 0:01:47.645 ********
2026-04-16 06:08:32.195780 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195785 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195790 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:08:32.195794 | orchestrator |
2026-04-16 06:08:32.195799 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-16 06:08:32.195804 | orchestrator | Thursday 16 April 2026 06:08:29 +0000 (0:00:01.041) 0:01:48.687 ********
2026-04-16 06:08:32.195809 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:08:32.195814 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:08:32.195822 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:09:50.250978 | orchestrator |
2026-04-16 06:09:50.251102 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-16 06:09:50.251122 | orchestrator | Thursday 16 April 2026 06:08:32 +0000 (0:00:02.560) 0:01:51.248 ********
2026-04-16 06:09:50.251134 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:09:50.251146 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:09:50.251158 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:09:50.251170 | orchestrator |
2026-04-16 06:09:50.251181 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-16 06:09:50.251193 | orchestrator | Thursday 16 April 2026 06:08:53 +0000 (0:00:20.958) 0:02:12.206 ********
2026-04-16 06:09:50.251203 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:09:50.251214 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:09:50.251225 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:09:50.251236 | orchestrator |
2026-04-16 06:09:50.251247 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-16 06:09:50.251258 | orchestrator | Thursday 16 April 2026 06:09:05 +0000 (0:00:12.133) 0:02:24.340 ********
2026-04-16 06:09:50.251269 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:09:50.251280 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:09:50.251290 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:09:50.251301 | orchestrator | 2026-04-16 06:09:50.251312 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-04-16 06:09:50.251323 | orchestrator | Thursday 16 April 2026 06:09:06 +0000 (0:00:01.074) 0:02:25.414 ******** 2026-04-16 06:09:50.251334 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:50.251345 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:50.251355 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:09:50.251366 | orchestrator | 2026-04-16 06:09:50.251377 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-04-16 06:09:50.251388 | orchestrator | Thursday 16 April 2026 06:09:18 +0000 (0:00:12.319) 0:02:37.734 ******** 2026-04-16 06:09:50.251399 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:50.251409 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:50.251420 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:50.251430 | orchestrator | 2026-04-16 06:09:50.251441 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-04-16 06:09:50.251475 | orchestrator | Thursday 16 April 2026 06:09:19 +0000 (0:00:01.028) 0:02:38.762 ******** 2026-04-16 06:09:50.251487 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:50.251498 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:50.251509 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:50.251522 | orchestrator | 2026-04-16 06:09:50.251535 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-04-16 06:09:50.251547 | orchestrator | 2026-04-16 06:09:50.251560 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-16 06:09:50.251572 | orchestrator | Thursday 16 April 2026 06:09:19 +0000 (0:00:00.294) 0:02:39.057 ******** 2026-04-16 06:09:50.251585 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:09:50.251600 | orchestrator | 2026-04-16 06:09:50.251613 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-04-16 06:09:50.251626 | orchestrator | Thursday 16 April 2026 06:09:20 +0000 (0:00:00.703) 0:02:39.760 ******** 2026-04-16 06:09:50.251672 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-04-16 06:09:50.251685 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-04-16 06:09:50.251698 | orchestrator | 2026-04-16 06:09:50.251710 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-04-16 06:09:50.251722 | orchestrator | Thursday 16 April 2026 06:09:24 +0000 (0:00:03.973) 0:02:43.734 ******** 2026-04-16 06:09:50.251735 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-04-16 06:09:50.251844 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-04-16 06:09:50.251865 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-04-16 06:09:50.251876 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-04-16 06:09:50.251887 | orchestrator | 2026-04-16 06:09:50.251898 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-04-16 06:09:50.251909 | orchestrator | Thursday 16 April 2026 06:09:31 +0000 (0:00:06.666) 0:02:50.400 ******** 2026-04-16 06:09:50.251920 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:09:50.251931 | orchestrator | 2026-04-16 06:09:50.251941 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-04-16 06:09:50.251951 | orchestrator | Thursday 16 April 2026 06:09:34 +0000 (0:00:03.142) 0:02:53.543 ******** 2026-04-16 06:09:50.251962 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:09:50.251972 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-04-16 06:09:50.251984 | orchestrator | 2026-04-16 06:09:50.251994 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-04-16 06:09:50.252005 | orchestrator | Thursday 16 April 2026 06:09:38 +0000 (0:00:03.923) 0:02:57.466 ******** 2026-04-16 06:09:50.252015 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-16 06:09:50.252026 | orchestrator | 2026-04-16 06:09:50.252036 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-04-16 06:09:50.252047 | orchestrator | Thursday 16 April 2026 06:09:41 +0000 (0:00:03.290) 0:03:00.757 ******** 2026-04-16 06:09:50.252057 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-04-16 06:09:50.252068 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-04-16 06:09:50.252079 | orchestrator | 2026-04-16 06:09:50.252094 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-04-16 06:09:50.252125 | orchestrator | Thursday 16 April 2026 06:09:48 +0000 (0:00:07.304) 0:03:08.062 ******** 2026-04-16 06:09:50.252143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:50.252172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:50.252186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:50.252211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-04-16 06:09:54.605435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:54.605541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:54.605558 | orchestrator | 2026-04-16 06:09:54.605573 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-16 06:09:54.605586 | orchestrator | Thursday 16 April 2026 06:09:50 +0000 (0:00:01.243) 0:03:09.306 ******** 2026-04-16 06:09:54.605598 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:54.605611 | orchestrator | 2026-04-16 06:09:54.605623 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-16 06:09:54.605686 | orchestrator | Thursday 16 April 2026 06:09:50 +0000 (0:00:00.128) 0:03:09.434 ******** 2026-04-16 06:09:54.605698 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:54.605709 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:54.605720 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:54.605730 | orchestrator | 2026-04-16 06:09:54.605741 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-16 06:09:54.605752 | orchestrator | Thursday 16 April 2026 06:09:50 +0000 (0:00:00.271) 0:03:09.706 ******** 2026-04-16 06:09:54.605763 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 06:09:54.605774 | orchestrator | 2026-04-16 06:09:54.605785 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-16 06:09:54.605797 | orchestrator | Thursday 16 April 2026 06:09:51 +0000 (0:00:00.658) 0:03:10.365 ******** 2026-04-16 06:09:54.605809 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:54.605830 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:54.605848 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:54.605867 | orchestrator | 2026-04-16 06:09:54.605886 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-16 06:09:54.605903 | orchestrator | Thursday 16 April 2026 06:09:51 +0000 (0:00:00.488) 0:03:10.853 ******** 2026-04-16 06:09:54.605921 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:09:54.605942 | orchestrator | 2026-04-16 06:09:54.605962 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 06:09:54.605983 | orchestrator | Thursday 16 April 2026 06:09:52 +0000 (0:00:00.533) 0:03:11.386 ******** 2026-04-16 06:09:54.606088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:54.606161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:54.606179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:54.606194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:54.606219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:54.606246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:54.606259 | orchestrator | 2026-04-16 06:09:54.606281 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 06:09:56.229596 | orchestrator | Thursday 16 April 2026 06:09:54 +0000 (0:00:02.275) 0:03:13.662 ******** 2026-04-16 06:09:56.229805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:56.229839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:56.229858 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:09:56.229881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:56.229949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:56.229970 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:09:56.230090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:56.230121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:56.230153 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:09:56.230175 | orchestrator | 2026-04-16 06:09:56.230197 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 06:09:56.230220 | orchestrator | Thursday 16 April 2026 06:09:55 +0000 (0:00:00.854) 0:03:14.517 
******** 2026-04-16 06:09:56.230242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:56.230277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:56.230296 | orchestrator | skipping: [testbed-node-0] 
2026-04-16 06:09:56.230340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:58.484790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:58.484914 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
06:09:58.484936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:09:58.485004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:09:58.485019 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
06:09:58.485031 | orchestrator | 2026-04-16 06:09:58.485043 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-16 06:09:58.485055 | orchestrator | Thursday 16 April 2026 06:09:56 +0000 (0:00:00.775) 0:03:15.292 ******** 2026-04-16 06:09:58.485082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:58.485118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:58.485133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:09:58.485159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:58.485173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:09:58.485192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-04-16 06:10:04.588971 | orchestrator | 2026-04-16 06:10:04.589081 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-16 06:10:04.589100 | orchestrator | Thursday 16 April 2026 06:09:58 +0000 (0:00:02.253) 0:03:17.545 ******** 2026-04-16 06:10:04.589119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:04.589158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:04.589187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:04.589282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:04.589299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:04.589320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:04.589331 | orchestrator | 2026-04-16 06:10:04.589343 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-16 06:10:04.589354 | orchestrator | Thursday 16 April 2026 06:10:03 +0000 (0:00:05.512) 0:03:23.058 ******** 2026-04-16 06:10:04.589372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:10:04.589384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:10:04.589396 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:04.589420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:10:08.572334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:10:08.572429 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:10:08.572450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-04-16 06:10:08.572480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:10:08.572492 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:08.572504 | orchestrator | 2026-04-16 06:10:08.572516 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-16 06:10:08.572528 | orchestrator | Thursday 16 April 2026 06:10:04 +0000 (0:00:00.593) 0:03:23.651 ******** 2026-04-16 06:10:08.572538 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:10:08.572549 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:10:08.572560 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:10:08.572571 | orchestrator | 2026-04-16 06:10:08.572582 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-04-16 06:10:08.572593 | orchestrator | Thursday 16 April 2026 06:10:06 +0000 (0:00:01.465) 0:03:25.117 ******** 2026-04-16 06:10:08.572603 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:08.572614 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:10:08.572624 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:08.572696 | orchestrator | 2026-04-16 06:10:08.572707 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-04-16 06:10:08.572718 | orchestrator | Thursday 16 April 2026 06:10:06 +0000 (0:00:00.308) 0:03:25.425 ******** 2026-04-16 06:10:08.572769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:08.572784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:08.572803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-04-16 06:10:08.572816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:08.572835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:08.572855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:45.820126 | orchestrator | 2026-04-16 06:10:45.820229 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 06:10:45.820246 | orchestrator | Thursday 16 April 2026 06:10:08 +0000 (0:00:01.802) 0:03:27.227 ******** 2026-04-16 06:10:45.820257 | orchestrator | 2026-04-16 06:10:45.820269 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 06:10:45.820280 | orchestrator | Thursday 16 April 2026 06:10:08 +0000 (0:00:00.136) 0:03:27.364 ******** 2026-04-16 
06:10:45.820291 | orchestrator | 2026-04-16 06:10:45.820302 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 06:10:45.820313 | orchestrator | Thursday 16 April 2026 06:10:08 +0000 (0:00:00.132) 0:03:27.496 ******** 2026-04-16 06:10:45.820324 | orchestrator | 2026-04-16 06:10:45.820335 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-16 06:10:45.820345 | orchestrator | Thursday 16 April 2026 06:10:08 +0000 (0:00:00.134) 0:03:27.630 ******** 2026-04-16 06:10:45.820356 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:10:45.820368 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:10:45.820379 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:10:45.820390 | orchestrator | 2026-04-16 06:10:45.820401 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-16 06:10:45.820412 | orchestrator | Thursday 16 April 2026 06:10:26 +0000 (0:00:17.846) 0:03:45.477 ******** 2026-04-16 06:10:45.820423 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:10:45.820434 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:10:45.820445 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:10:45.820455 | orchestrator | 2026-04-16 06:10:45.820466 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-16 06:10:45.820477 | orchestrator | 2026-04-16 06:10:45.820488 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 06:10:45.820499 | orchestrator | Thursday 16 April 2026 06:10:34 +0000 (0:00:08.180) 0:03:53.657 ******** 2026-04-16 06:10:45.820511 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:10:45.820523 | orchestrator | 2026-04-16 06:10:45.820547 | 
orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-16 06:10:45.820559 | orchestrator | Thursday 16 April 2026 06:10:35 +0000 (0:00:01.132) 0:03:54.790 ********
2026-04-16 06:10:45.820569 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:10:45.820602 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:10:45.820614 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:10:45.820625 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:10:45.820692 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:10:45.820704 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:10:45.820717 | orchestrator |
2026-04-16 06:10:45.820729 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-04-16 06:10:45.820742 | orchestrator | Thursday 16 April 2026 06:10:36 +0000 (0:00:00.703) 0:03:55.493 ********
2026-04-16 06:10:45.820755 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:10:45.820772 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:10:45.820791 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:10:45.820810 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 06:10:45.820831 | orchestrator |
2026-04-16 06:10:45.820851 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-04-16 06:10:45.820871 | orchestrator | Thursday 16 April 2026 06:10:37 +0000 (0:00:00.794) 0:03:56.288 ********
2026-04-16 06:10:45.820889 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-04-16 06:10:45.820902 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-04-16 06:10:45.820915 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-04-16 06:10:45.820928 | orchestrator |
2026-04-16 06:10:45.820941 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-04-16 06:10:45.820954 | orchestrator | Thursday 16 April 2026 06:10:38 +0000 (0:00:00.897) 0:03:57.186 ********
2026-04-16 06:10:45.820966 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-04-16 06:10:45.820978 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-04-16 06:10:45.820990 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-04-16 06:10:45.821001 | orchestrator |
2026-04-16 06:10:45.821011 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-04-16 06:10:45.821022 | orchestrator | Thursday 16 April 2026 06:10:39 +0000 (0:00:01.166) 0:03:58.352 ********
2026-04-16 06:10:45.821033 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-04-16 06:10:45.821044 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:10:45.821054 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-04-16 06:10:45.821065 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:10:45.821075 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-04-16 06:10:45.821086 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:10:45.821097 | orchestrator |
2026-04-16 06:10:45.821107 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-04-16 06:10:45.821118 | orchestrator | Thursday 16 April 2026 06:10:39 +0000 (0:00:00.512) 0:03:58.865 ********
2026-04-16 06:10:45.821129 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821140 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821150 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821161 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821172 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:10:45.821182 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821193 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821204 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:10:45.821232 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821244 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821255 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:10:45.821265 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 06:10:45.821286 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821297 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821308 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 06:10:45.821318 | orchestrator |
2026-04-16 06:10:45.821329 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-04-16 06:10:45.821340 | orchestrator | Thursday 16 April 2026 06:10:41 +0000 (0:00:01.242) 0:04:00.108 ********
2026-04-16 06:10:45.821351 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:10:45.821362 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:10:45.821372 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:10:45.821383 | orchestrator | changed: [testbed-node-3]
2026-04-16 06:10:45.821394 | orchestrator | changed: [testbed-node-4]
2026-04-16 06:10:45.821404 | orchestrator | changed: [testbed-node-5]
2026-04-16 06:10:45.821415 | orchestrator |
2026-04-16 06:10:45.821426 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-04-16 06:10:45.821437 | orchestrator |
Thursday 16 April 2026 06:10:42 +0000 (0:00:01.132) 0:04:01.240 ******** 2026-04-16 06:10:45.821447 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:45.821458 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:10:45.821469 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:45.821479 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:10:45.821490 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:10:45.821500 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:10:45.821511 | orchestrator | 2026-04-16 06:10:45.821522 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-16 06:10:45.821533 | orchestrator | Thursday 16 April 2026 06:10:43 +0000 (0:00:01.673) 0:04:02.913 ******** 2026-04-16 06:10:45.821552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:45.821569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:45.821589 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431907 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:47.431950 | orchestrator | 2026-04-16 06:10:47.431963 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 
2026-04-16 06:10:47.431975 | orchestrator | Thursday 16 April 2026 06:10:46 +0000 (0:00:02.347) 0:04:05.260 ******** 2026-04-16 06:10:47.431987 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:10:47.431999 | orchestrator | 2026-04-16 06:10:47.432010 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 06:10:47.432029 | orchestrator | Thursday 16 April 2026 06:10:47 +0000 (0:00:01.234) 0:04:06.495 ******** 2026-04-16 06:10:50.499310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499425 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499440 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 
06:10:50.499538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:50.499595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:52.450622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:52.450730 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:52.450739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:10:52.450745 | orchestrator | 2026-04-16 06:10:52.450751 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 06:10:52.450757 | orchestrator | Thursday 16 April 2026 06:10:50 +0000 (0:00:03.547) 0:04:10.043 ******** 2026-04-16 06:10:52.450776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:52.450782 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:52.450800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:52.450805 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:10:52.450814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:52.450819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:52.450824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:52.450833 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:10:52.450838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:52.450847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:53.888333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:53.888435 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:10:53.888467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:53.888479 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:53.888512 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:53.888523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:53.888537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:53.888547 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
06:10:53.888558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:53.888591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:53.888605 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:53.888614 | orchestrator | 2026-04-16 06:10:53.888625 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 06:10:53.888682 | orchestrator | Thursday 16 April 2026 06:10:52 +0000 (0:00:01.565) 0:04:11.609 ******** 2026-04-16 06:10:53.888701 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:53.888726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:53.888734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:53.888741 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:10:53.888747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:53.888762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:58.102861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 
'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:58.103024 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:10:58.103055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:10:58.103078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:10:58.103099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:10:58.103116 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:10:58.103129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:58.103161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:58.103173 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:58.103193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:58.103214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:58.103225 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:10:58.103236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:10:58.103247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:10:58.103258 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:58.103269 | orchestrator | 2026-04-16 06:10:58.103281 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 06:10:58.103297 | orchestrator | Thursday 16 April 2026 06:10:54 +0000 (0:00:01.983) 0:04:13.593 ******** 2026-04-16 06:10:58.103309 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:10:58.103323 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:10:58.103335 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:10:58.103348 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:10:58.103360 | orchestrator | 2026-04-16 06:10:58.103373 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-16 
06:10:58.103386 | orchestrator | Thursday 16 April 2026 06:10:55 +0000 (0:00:01.089) 0:04:14.683 ******** 2026-04-16 06:10:58.103399 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:10:58.103411 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:10:58.103422 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:10:58.103433 | orchestrator | 2026-04-16 06:10:58.103444 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-16 06:10:58.103455 | orchestrator | Thursday 16 April 2026 06:10:56 +0000 (0:00:01.082) 0:04:15.765 ******** 2026-04-16 06:10:58.103465 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:10:58.103476 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:10:58.103487 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:10:58.103498 | orchestrator | 2026-04-16 06:10:58.103508 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-16 06:10:58.103526 | orchestrator | Thursday 16 April 2026 06:10:57 +0000 (0:00:00.878) 0:04:16.644 ******** 2026-04-16 06:10:58.103537 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:10:58.103548 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:10:58.103558 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:10:58.103569 | orchestrator | 2026-04-16 06:10:58.103586 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-16 06:11:18.870309 | orchestrator | Thursday 16 April 2026 06:10:58 +0000 (0:00:00.520) 0:04:17.164 ******** 2026-04-16 06:11:18.870394 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:11:18.870402 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:11:18.870408 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:11:18.870413 | orchestrator | 2026-04-16 06:11:18.870419 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-04-16 06:11:18.870424 | orchestrator | Thursday 16 April 2026 06:10:58 +0000 (0:00:00.495) 0:04:17.659 ******** 2026-04-16 06:11:18.870430 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-16 06:11:18.870436 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-16 06:11:18.870441 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-16 06:11:18.870445 | orchestrator | 2026-04-16 06:11:18.870451 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-16 06:11:18.870467 | orchestrator | Thursday 16 April 2026 06:10:59 +0000 (0:00:01.391) 0:04:19.051 ******** 2026-04-16 06:11:18.870472 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-16 06:11:18.870478 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-16 06:11:18.870482 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-16 06:11:18.870487 | orchestrator | 2026-04-16 06:11:18.870492 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-16 06:11:18.870497 | orchestrator | Thursday 16 April 2026 06:11:01 +0000 (0:00:01.192) 0:04:20.244 ******** 2026-04-16 06:11:18.870502 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-04-16 06:11:18.870507 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-04-16 06:11:18.870512 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-04-16 06:11:18.870516 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-04-16 06:11:18.870521 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-04-16 06:11:18.870526 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-04-16 06:11:18.870531 | orchestrator | 2026-04-16 06:11:18.870536 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-16 
06:11:18.870540 | orchestrator | Thursday 16 April 2026 06:11:04 +0000 (0:00:03.601) 0:04:23.845 ******** 2026-04-16 06:11:18.870545 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:18.870551 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:18.870556 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:18.870560 | orchestrator | 2026-04-16 06:11:18.870565 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-16 06:11:18.870570 | orchestrator | Thursday 16 April 2026 06:11:05 +0000 (0:00:00.313) 0:04:24.159 ******** 2026-04-16 06:11:18.870575 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:18.870580 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:18.870585 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:18.870590 | orchestrator | 2026-04-16 06:11:18.870595 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-16 06:11:18.870600 | orchestrator | Thursday 16 April 2026 06:11:05 +0000 (0:00:00.529) 0:04:24.689 ******** 2026-04-16 06:11:18.870605 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:11:18.870609 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:11:18.870614 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:11:18.870619 | orchestrator | 2026-04-16 06:11:18.870624 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-16 06:11:18.870693 | orchestrator | Thursday 16 April 2026 06:11:06 +0000 (0:00:01.197) 0:04:25.887 ******** 2026-04-16 06:11:18.870703 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-16 06:11:18.870713 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-16 06:11:18.870720 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-04-16 06:11:18.870727 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-16 06:11:18.870735 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-16 06:11:18.870743 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-04-16 06:11:18.870750 | orchestrator | 2026-04-16 06:11:18.870758 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-16 06:11:18.870765 | orchestrator | Thursday 16 April 2026 06:11:09 +0000 (0:00:03.114) 0:04:29.001 ******** 2026-04-16 06:11:18.870774 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-16 06:11:18.870781 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-16 06:11:18.870790 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-16 06:11:18.870798 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-04-16 06:11:18.870806 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:11:18.870814 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-04-16 06:11:18.870822 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:11:18.870830 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-04-16 06:11:18.870838 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:11:18.870845 | orchestrator | 2026-04-16 06:11:18.870852 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-16 06:11:18.870860 | orchestrator | Thursday 16 April 2026 06:11:13 +0000 (0:00:03.287) 0:04:32.288 ******** 2026-04-16 06:11:18.870868 | 
orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:18.870877 | orchestrator | 2026-04-16 06:11:18.870901 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-16 06:11:18.870910 | orchestrator | Thursday 16 April 2026 06:11:13 +0000 (0:00:00.128) 0:04:32.417 ******** 2026-04-16 06:11:18.870919 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:18.870925 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:18.870931 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:18.870939 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:18.870947 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:18.870955 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:18.870963 | orchestrator | 2026-04-16 06:11:18.870972 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-16 06:11:18.870990 | orchestrator | Thursday 16 April 2026 06:11:14 +0000 (0:00:00.780) 0:04:33.197 ******** 2026-04-16 06:11:18.870999 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:11:18.871008 | orchestrator | 2026-04-16 06:11:18.871023 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-16 06:11:18.871033 | orchestrator | Thursday 16 April 2026 06:11:14 +0000 (0:00:00.672) 0:04:33.869 ******** 2026-04-16 06:11:18.871040 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:18.871046 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:18.871051 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:18.871057 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:18.871063 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:18.871068 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:18.871073 | orchestrator | 2026-04-16 06:11:18.871078 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-04-16 06:11:18.871089 | orchestrator | Thursday 16 April 2026 06:11:15 +0000 (0:00:00.814) 0:04:34.684 ******** 2026-04-16 06:11:18.871097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:18.871105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:18.871110 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:18.871122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168506 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168618 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168740 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168764 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168784 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:23.168804 | orchestrator | 2026-04-16 06:11:23.168824 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-16 06:11:23.168846 | orchestrator | Thursday 16 April 2026 06:11:19 +0000 (0:00:03.455) 0:04:38.140 ******** 2026-04-16 06:11:23.168879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:25.145914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:25.146182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:25.146211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:25.146223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:25.146234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:25.146267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146307 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146339 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:25.146357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:42.204831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:42.204936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:42.204950 | orchestrator | 2026-04-16 06:11:42.204961 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-16 06:11:42.204971 | orchestrator | Thursday 16 April 2026 06:11:25 +0000 (0:00:06.071) 0:04:44.211 ******** 2026-04-16 06:11:42.204980 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:42.204990 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:42.204999 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:42.205008 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:42.205016 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:42.205024 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:42.205033 | orchestrator | 2026-04-16 06:11:42.205042 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-16 06:11:42.205051 | orchestrator | Thursday 16 April 2026 06:11:26 +0000 (0:00:01.220) 0:04:45.432 ******** 2026-04-16 06:11:42.205059 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 06:11:42.205069 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 06:11:42.205077 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 06:11:42.205086 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 06:11:42.205094 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 06:11:42.205103 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 06:11:42.205112 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 06:11:42.205121 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:42.205130 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 06:11:42.205138 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:42.205147 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 06:11:42.205155 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:42.205164 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 06:11:42.205191 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 06:11:42.205200 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 06:11:42.205209 | orchestrator | 2026-04-16 06:11:42.205218 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-16 06:11:42.205227 | orchestrator | Thursday 16 April 2026 06:11:29 +0000 (0:00:03.391) 0:04:48.823 ******** 2026-04-16 06:11:42.205235 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:42.205257 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:42.205267 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:42.205275 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:42.205292 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:42.205301 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:42.205310 | orchestrator | 2026-04-16 06:11:42.205318 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-16 06:11:42.205327 | orchestrator | Thursday 16 April 2026 06:11:30 +0000 (0:00:00.593) 0:04:49.417 ******** 2026-04-16 06:11:42.205336 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 06:11:42.205345 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 06:11:42.205355 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 06:11:42.205364 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 06:11:42.205390 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 06:11:42.205405 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 06:11:42.205415 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205425 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205436 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205446 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205456 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:42.205466 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205476 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 06:11:42.205486 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 06:11:42.205494 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:42.205503 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205511 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205520 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205529 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205537 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205545 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 06:11:42.205554 | orchestrator | 2026-04-16 06:11:42.205563 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-16 06:11:42.205579 | orchestrator | Thursday 16 April 2026 06:11:35 +0000 (0:00:05.073) 0:04:54.491 ******** 2026-04-16 06:11:42.205588 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 06:11:42.205597 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 06:11:42.205605 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 06:11:42.205614 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 06:11:42.205622 | 
orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 06:11:42.205631 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 06:11:42.205659 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 06:11:42.205668 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 06:11:42.205677 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 06:11:42.205685 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 06:11:42.205694 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 06:11:42.205703 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 06:11:42.205711 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 06:11:42.205720 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:42.205728 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 06:11:42.205737 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 06:11:42.205746 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:42.205754 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 06:11:42.205763 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:42.205772 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 06:11:42.205781 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 06:11:42.205789 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 06:11:42.205798 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 06:11:42.205806 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 06:11:42.205815 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 06:11:42.205829 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 06:11:46.745257 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 06:11:46.745369 | orchestrator | 2026-04-16 06:11:46.745385 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-16 06:11:46.745398 | orchestrator | Thursday 16 April 2026 06:11:42 +0000 (0:00:06.760) 0:05:01.251 ******** 2026-04-16 06:11:46.745409 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:46.745422 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:46.745433 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:46.745444 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:46.745454 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:46.745465 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:46.745476 | orchestrator | 2026-04-16 06:11:46.745488 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-16 06:11:46.745519 | orchestrator | Thursday 16 April 2026 06:11:42 +0000 (0:00:00.782) 0:05:02.034 ******** 2026-04-16 06:11:46.745531 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:46.745542 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:46.745553 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:46.745563 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:46.745574 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:46.745585 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:46.745596 | orchestrator | 2026-04-16 06:11:46.745607 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-16 06:11:46.745623 | orchestrator | Thursday 16 April 2026 06:11:43 +0000 (0:00:00.644) 0:05:02.679 ******** 2026-04-16 06:11:46.745723 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:46.745744 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:11:46.745761 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:46.745779 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:46.745796 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:11:46.745814 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:11:46.745832 | orchestrator | 2026-04-16 06:11:46.745851 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-16 06:11:46.745872 | orchestrator | Thursday 16 April 2026 06:11:45 +0000 (0:00:02.053) 0:05:04.732 ******** 2026-04-16 06:11:46.745897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:46.745921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:46.745936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:11:46.745950 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:46.745993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:46.746098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:46.746125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:11:46.746143 | orchestrator | skipping: 
[testbed-node-4] 2026-04-16 06:11:46.746162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 06:11:46.746183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 06:11:46.746225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 06:11:50.109860 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:50.109995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:11:50.110105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:11:50.110129 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:50.110151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:11:50.110171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:11:50.110189 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:50.110208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 06:11:50.110227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:11:50.110278 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:50.110300 | orchestrator | 2026-04-16 06:11:50.110321 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-16 06:11:50.110343 | orchestrator | Thursday 16 April 2026 06:11:46 +0000 (0:00:01.320) 0:05:06.053 ******** 2026-04-16 06:11:50.110381 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-16 06:11:50.110428 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110451 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:50.110471 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-16 06:11:50.110490 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110508 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:50.110528 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-16 06:11:50.110548 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110567 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:50.110586 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-16 06:11:50.110605 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110625 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:50.110709 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-04-16 06:11:50.110730 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110748 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:50.110767 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-16 06:11:50.110785 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-16 06:11:50.110802 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:50.110819 | orchestrator | 2026-04-16 06:11:50.110836 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-04-16 06:11:50.110855 | orchestrator | Thursday 16 April 2026 06:11:47 +0000 (0:00:00.840) 0:05:06.893 ******** 2026-04-16 06:11:50.110876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:50.110897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:50.110938 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 06:11:50.110990 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162395 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162409 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:11:52.162573 | orchestrator | 2026-04-16 06:11:52.162586 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 06:11:52.162599 | orchestrator | Thursday 16 April 2026 06:11:50 +0000 (0:00:02.524) 0:05:09.418 ******** 2026-04-16 
06:11:52.162610 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:11:52.162622 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:11:52.162633 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:11:52.162686 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:11:52.162698 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:11:52.162708 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:11:52.162719 | orchestrator | 2026-04-16 06:11:52.162731 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 06:11:52.162742 | orchestrator | Thursday 16 April 2026 06:11:51 +0000 (0:00:00.809) 0:05:10.227 ******** 2026-04-16 06:11:52.162752 | orchestrator | 2026-04-16 06:11:52.162763 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 06:11:52.162774 | orchestrator | Thursday 16 April 2026 06:11:51 +0000 (0:00:00.155) 0:05:10.382 ******** 2026-04-16 06:11:52.162792 | orchestrator | 2026-04-16 06:11:52.162805 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 06:11:52.162818 | orchestrator | Thursday 16 April 2026 06:11:51 +0000 (0:00:00.133) 0:05:10.516 ******** 2026-04-16 06:11:52.162831 | orchestrator | 2026-04-16 06:11:52.162843 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 06:11:52.162862 | orchestrator | Thursday 16 April 2026 06:11:51 +0000 (0:00:00.132) 0:05:10.649 ******** 2026-04-16 06:15:01.257252 | orchestrator | 2026-04-16 06:15:01.257370 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 06:15:01.257388 | orchestrator | Thursday 16 April 2026 06:11:51 +0000 (0:00:00.132) 0:05:10.781 ******** 2026-04-16 06:15:01.257400 | orchestrator | 2026-04-16 06:15:01.257411 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-04-16 06:15:01.257422 | orchestrator | Thursday 16 April 2026 06:11:52 +0000 (0:00:00.293) 0:05:11.075 ******** 2026-04-16 06:15:01.257433 | orchestrator | 2026-04-16 06:15:01.257444 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-16 06:15:01.257455 | orchestrator | Thursday 16 April 2026 06:11:52 +0000 (0:00:00.137) 0:05:11.212 ******** 2026-04-16 06:15:01.257466 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:15:01.257478 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:15:01.257489 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:15:01.257500 | orchestrator | 2026-04-16 06:15:01.257511 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-04-16 06:15:01.257522 | orchestrator | Thursday 16 April 2026 06:11:58 +0000 (0:00:06.491) 0:05:17.703 ******** 2026-04-16 06:15:01.257533 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:15:01.257544 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:15:01.257555 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:15:01.257591 | orchestrator | 2026-04-16 06:15:01.257603 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-04-16 06:15:01.257614 | orchestrator | Thursday 16 April 2026 06:12:16 +0000 (0:00:17.888) 0:05:35.592 ******** 2026-04-16 06:15:01.257625 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:15:01.257636 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:15:01.257646 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:15:01.257657 | orchestrator | 2026-04-16 06:15:01.257668 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-04-16 06:15:01.257679 | orchestrator | Thursday 16 April 2026 06:12:41 +0000 (0:00:25.440) 0:06:01.033 ******** 2026-04-16 06:15:01.257755 | orchestrator | changed: 
[testbed-node-3] 2026-04-16 06:15:01.257767 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:15:01.257780 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:15:01.257793 | orchestrator | 2026-04-16 06:15:01.257806 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-04-16 06:15:01.257820 | orchestrator | Thursday 16 April 2026 06:13:20 +0000 (0:00:38.886) 0:06:39.919 ******** 2026-04-16 06:15:01.257833 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-04-16 06:15:01.257847 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-04-16 06:15:01.257860 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-04-16 06:15:01.257873 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:15:01.257885 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:15:01.257898 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:15:01.257910 | orchestrator | 2026-04-16 06:15:01.257923 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-04-16 06:15:01.257936 | orchestrator | Thursday 16 April 2026 06:13:27 +0000 (0:00:06.187) 0:06:46.107 ******** 2026-04-16 06:15:01.257949 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:15:01.257962 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:15:01.257975 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:15:01.257987 | orchestrator | 2026-04-16 06:15:01.258001 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-04-16 06:15:01.258074 | orchestrator | Thursday 16 April 2026 06:13:27 +0000 (0:00:00.738) 0:06:46.845 ******** 2026-04-16 06:15:01.258090 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:15:01.258105 | orchestrator | changed: [testbed-node-4] 2026-04-16 
06:15:01.258124 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:15:01.258143 | orchestrator | 2026-04-16 06:15:01.258163 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-04-16 06:15:01.258181 | orchestrator | Thursday 16 April 2026 06:13:57 +0000 (0:00:29.694) 0:07:16.540 ******** 2026-04-16 06:15:01.258198 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:15:01.258215 | orchestrator | 2026-04-16 06:15:01.258234 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-04-16 06:15:01.258254 | orchestrator | Thursday 16 April 2026 06:13:57 +0000 (0:00:00.125) 0:07:16.666 ******** 2026-04-16 06:15:01.258274 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:15:01.258294 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:01.258312 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:15:01.258323 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:01.258334 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:01.258345 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-04-16 06:15:01.258358 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 06:15:01.258369 | orchestrator | 2026-04-16 06:15:01.258380 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-04-16 06:15:01.258390 | orchestrator | Thursday 16 April 2026 06:14:19 +0000 (0:00:21.766) 0:07:38.432 ******** 2026-04-16 06:15:01.258414 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:15:01.258424 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:15:01.258435 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:15:01.258446 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:01.258456 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:01.258467 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:01.258478 | orchestrator | 2026-04-16 06:15:01.258502 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-04-16 06:15:01.258513 | orchestrator | Thursday 16 April 2026 06:14:27 +0000 (0:00:07.700) 0:07:46.133 ******** 2026-04-16 06:15:01.258524 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:01.258536 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:15:01.258546 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:15:01.258557 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:01.258567 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:01.258597 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-04-16 06:15:01.258609 | orchestrator | 2026-04-16 06:15:01.258620 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-04-16 06:15:01.258631 | orchestrator | Thursday 16 April 2026 06:14:30 +0000 (0:00:03.483) 0:07:49.616 ******** 2026-04-16 06:15:01.258641 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 06:15:01.258652 | 
orchestrator | 2026-04-16 06:15:01.258663 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-04-16 06:15:01.258674 | orchestrator | Thursday 16 April 2026 06:14:42 +0000 (0:00:12.306) 0:08:01.922 ******** 2026-04-16 06:15:01.258734 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 06:15:01.258746 | orchestrator | 2026-04-16 06:15:01.258757 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-04-16 06:15:01.258768 | orchestrator | Thursday 16 April 2026 06:14:44 +0000 (0:00:01.492) 0:08:03.415 ******** 2026-04-16 06:15:01.258779 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:15:01.258790 | orchestrator | 2026-04-16 06:15:01.258801 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-04-16 06:15:01.258812 | orchestrator | Thursday 16 April 2026 06:14:46 +0000 (0:00:01.665) 0:08:05.080 ******** 2026-04-16 06:15:01.258823 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 06:15:01.258834 | orchestrator | 2026-04-16 06:15:01.258845 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-04-16 06:15:01.258856 | orchestrator | Thursday 16 April 2026 06:14:57 +0000 (0:00:11.407) 0:08:16.487 ******** 2026-04-16 06:15:01.258867 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:15:01.258879 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:15:01.258889 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:15:01.258900 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:15:01.258911 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:15:01.258921 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:15:01.258932 | orchestrator | 2026-04-16 06:15:01.258943 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-04-16 06:15:01.258954 | orchestrator | 2026-04-16 
06:15:01.258965 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-16 06:15:01.258976 | orchestrator | Thursday 16 April 2026 06:14:59 +0000 (0:00:01.741) 0:08:18.229 ********
2026-04-16 06:15:01.258987 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:15:01.258997 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:15:01.259008 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:15:01.259019 | orchestrator |
2026-04-16 06:15:01.259030 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-16 06:15:01.259041 | orchestrator |
2026-04-16 06:15:01.259052 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-16 06:15:01.259063 | orchestrator | Thursday 16 April 2026 06:15:00 +0000 (0:00:00.891) 0:08:19.120 ********
2026-04-16 06:15:01.259073 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:01.259093 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:01.259103 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:01.259114 | orchestrator |
2026-04-16 06:15:01.259125 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-16 06:15:01.259136 | orchestrator |
2026-04-16 06:15:01.259147 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-16 06:15:01.259158 | orchestrator | Thursday 16 April 2026 06:15:00 +0000 (0:00:00.673) 0:08:19.794 ********
2026-04-16 06:15:01.259169 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-16 06:15:01.259180 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-16 06:15:01.259191 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-16 06:15:01.259202 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-16 06:15:01.259213 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-16 06:15:01.259223 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:01.259234 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:15:01.259245 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-16 06:15:01.259256 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-16 06:15:01.259267 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-16 06:15:01.259278 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-16 06:15:01.259288 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-16 06:15:01.259299 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:01.259310 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:15:01.259321 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-16 06:15:01.259331 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-16 06:15:01.259342 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-16 06:15:01.259353 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-16 06:15:01.259364 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-16 06:15:01.259374 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:01.259385 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:15:01.259396 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-16 06:15:01.259407 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-16 06:15:01.259418 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-16 06:15:01.259434 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-16 06:15:01.259446 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-16 06:15:01.259456 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:01.259467 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:01.259478 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-16 06:15:01.259495 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-16 06:15:04.169024 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-16 06:15:04.169145 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-16 06:15:04.169161 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-16 06:15:04.169173 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:04.169184 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:04.169194 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-16 06:15:04.169204 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-16 06:15:04.169214 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-16 06:15:04.169223 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-16 06:15:04.169257 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-16 06:15:04.169267 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-16 06:15:04.169276 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:04.169286 | orchestrator |
2026-04-16 06:15:04.169296 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-16 06:15:04.169306 | orchestrator |
2026-04-16 06:15:04.169316 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-16 06:15:04.169326 | orchestrator | Thursday 16 April 2026 06:15:01 +0000 (0:00:01.275) 0:08:21.070 ********
2026-04-16 06:15:04.169336 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-16 06:15:04.169346 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-16 06:15:04.169356 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:04.169365 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-16 06:15:04.169375 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-16 06:15:04.169384 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:04.169394 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-16 06:15:04.169403 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-16 06:15:04.169413 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:04.169422 | orchestrator |
2026-04-16 06:15:04.169432 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-16 06:15:04.169441 | orchestrator |
2026-04-16 06:15:04.169451 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-16 06:15:04.169461 | orchestrator | Thursday 16 April 2026 06:15:02 +0000 (0:00:00.534) 0:08:21.604 ********
2026-04-16 06:15:04.169470 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:04.169479 | orchestrator |
2026-04-16 06:15:04.169489 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-16 06:15:04.169499 | orchestrator |
2026-04-16 06:15:04.169508 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-16 06:15:04.169518 | orchestrator | Thursday 16 April 2026 06:15:03 +0000 (0:00:00.835) 0:08:22.439 ********
2026-04-16 06:15:04.169527 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:04.169537 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:04.169547 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:04.169556 | orchestrator |
2026-04-16 06:15:04.169566 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:15:04.169576 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:15:04.169589 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-04-16 06:15:04.169600 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-16 06:15:04.169609 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-04-16 06:15:04.169619 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-16 06:15:04.169628 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-16 06:15:04.169638 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-04-16 06:15:04.169647 | orchestrator |
2026-04-16 06:15:04.169657 | orchestrator |
2026-04-16 06:15:04.169667 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:15:04.169676 | orchestrator | Thursday 16 April 2026 06:15:03 +0000 (0:00:00.423) 0:08:22.863 ********
2026-04-16 06:15:04.169722 | orchestrator | ===============================================================================
2026-04-16 06:15:04.169733 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.89s
2026-04-16 06:15:04.169743 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.76s
2026-04-16 06:15:04.169769 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.69s
2026-04-16 06:15:04.169786 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.44s
2026-04-16 06:15:04.169803 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.77s
2026-04-16 06:15:04.169819 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.96s
2026-04-16 06:15:04.169857 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.89s
2026-04-16 06:15:04.169876 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.85s
2026-04-16 06:15:04.169893 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.72s
2026-04-16 06:15:04.169908 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.21s
2026-04-16 06:15:04.169926 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.32s
2026-04-16 06:15:04.169936 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.31s
2026-04-16 06:15:04.169946 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.13s
2026-04-16 06:15:04.169956 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.41s
2026-04-16 06:15:04.169965 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.06s
2026-04-16 06:15:04.169975 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.18s
2026-04-16 06:15:04.169984 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.92s
2026-04-16 06:15:04.169994 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.70s
2026-04-16 06:15:04.170003 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.30s
2026-04-16 06:15:04.170013 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.76s
2026-04-16 06:15:06.443930 | orchestrator | 2026-04-16 06:15:06 | INFO  | Task 900c3857-f764-4c1d-ad01-b922ba992060 (horizon) was prepared for execution.
2026-04-16 06:15:06.444062 | orchestrator | 2026-04-16 06:15:06 | INFO  | It takes a moment until task 900c3857-f764-4c1d-ad01-b922ba992060 (horizon) has been started and output is visible here.
2026-04-16 06:15:12.766330 | orchestrator |
2026-04-16 06:15:12.766446 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:15:12.766463 | orchestrator |
2026-04-16 06:15:12.766475 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:15:12.766487 | orchestrator | Thursday 16 April 2026 06:15:10 +0000 (0:00:00.188) 0:00:00.188 ********
2026-04-16 06:15:12.766498 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:12.766510 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:12.766521 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:12.766532 | orchestrator |
2026-04-16 06:15:12.766543 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:15:12.766554 | orchestrator | Thursday 16 April 2026 06:15:10 +0000 (0:00:00.233) 0:00:00.422 ********
2026-04-16 06:15:12.766566 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-16 06:15:12.766578 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-16 06:15:12.766589 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-16 06:15:12.766600 | orchestrator |
2026-04-16 06:15:12.766611 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-16 06:15:12.766622 | orchestrator |
2026-04-16 06:15:12.766633 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-16 06:15:12.766669 | orchestrator | Thursday 16 April 2026 06:15:10 +0000 (0:00:00.310) 0:00:00.732 ********
2026-04-16 06:15:12.766681 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:15:12.766746 | orchestrator |
2026-04-16 06:15:12.766757 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-16 06:15:12.766768 | orchestrator | Thursday 16 April 2026 06:15:11 +0000 (0:00:00.442) 0:00:01.174 ********
2026-04-16 06:15:12.766801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 06:15:12.766842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 06:15:12.766873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 06:15:12.766888 | orchestrator |
2026-04-16 06:15:12.766901 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-04-16 06:15:12.766914 | orchestrator | Thursday 16 April 2026 06:15:12 +0000 (0:00:01.034) 0:00:02.209 ********
2026-04-16 06:15:12.766927 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:12.766940 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:12.766952 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:12.766964 | orchestrator |
2026-04-16 06:15:12.766977 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-16 06:15:12.766990 | orchestrator | Thursday 16 April 2026 06:15:12 +0000 (0:00:00.332) 0:00:02.542 ********
2026-04-16 06:15:12.767009 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-16 06:15:18.000668 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-16 06:15:18.000858 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-04-16 06:15:18.000878 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-04-16 06:15:18.000910 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-04-16 06:15:18.000937 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-04-16 06:15:18.001736 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-04-16 06:15:18.001758 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-04-16 06:15:18.001772 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-16 06:15:18.001783 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-16 06:15:18.001793 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-04-16 06:15:18.001804 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-04-16 06:15:18.001815 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-04-16 06:15:18.001826 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-04-16 06:15:18.001837 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-04-16 06:15:18.001847 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-04-16 06:15:18.001858 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-04-16 06:15:18.001869 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-04-16 06:15:18.001879 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-04-16 06:15:18.001890 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-04-16 06:15:18.001901 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-04-16 06:15:18.001911 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-04-16 06:15:18.001922 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-04-16 06:15:18.001932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-04-16 06:15:18.001945 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-04-16 06:15:18.001973 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-04-16 06:15:18.001985 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-04-16 06:15:18.001996 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-04-16 06:15:18.002007 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-04-16 06:15:18.002068 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-04-16 06:15:18.002081 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-04-16 06:15:18.002092 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-04-16 06:15:18.002103 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-04-16 06:15:18.002130 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-04-16 06:15:18.002150 | orchestrator |
2026-04-16 06:15:18.002162 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.002175 | orchestrator | Thursday 16 April 2026 06:15:13 +0000 (0:00:00.632) 0:00:03.174 ********
2026-04-16 06:15:18.002186 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.002198 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.002209 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.002220 | orchestrator |
2026-04-16 06:15:18.002231 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.002242 | orchestrator | Thursday 16 April 2026 06:15:13 +0000 (0:00:00.267) 0:00:03.442 ********
2026-04-16 06:15:18.002253 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002265 | orchestrator |
2026-04-16 06:15:18.002295 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.002307 | orchestrator | Thursday 16 April 2026 06:15:13 +0000 (0:00:00.201) 0:00:03.644 ********
2026-04-16 06:15:18.002318 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002329 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.002339 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.002350 | orchestrator |
2026-04-16 06:15:18.002361 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.002372 | orchestrator | Thursday 16 April 2026 06:15:14 +0000 (0:00:00.271) 0:00:03.915 ********
2026-04-16 06:15:18.002383 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.002393 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.002404 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.002415 | orchestrator |
2026-04-16 06:15:18.002426 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.002437 | orchestrator | Thursday 16 April 2026 06:15:14 +0000 (0:00:00.260) 0:00:04.175 ********
2026-04-16 06:15:18.002448 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002459 | orchestrator |
2026-04-16 06:15:18.002470 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.002481 | orchestrator | Thursday 16 April 2026 06:15:14 +0000 (0:00:00.111) 0:00:04.287 ********
2026-04-16 06:15:18.002492 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002503 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.002514 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.002525 | orchestrator |
2026-04-16 06:15:18.002536 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.002547 | orchestrator | Thursday 16 April 2026 06:15:14 +0000 (0:00:00.268) 0:00:04.555 ********
2026-04-16 06:15:18.002557 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.002568 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.002579 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.002590 | orchestrator |
2026-04-16 06:15:18.002601 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.002612 | orchestrator | Thursday 16 April 2026 06:15:15 +0000 (0:00:00.475) 0:00:05.031 ********
2026-04-16 06:15:18.002623 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002633 | orchestrator |
2026-04-16 06:15:18.002644 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.002655 | orchestrator | Thursday 16 April 2026 06:15:15 +0000 (0:00:00.127) 0:00:05.159 ********
2026-04-16 06:15:18.002666 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002677 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.002712 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.002723 | orchestrator |
2026-04-16 06:15:18.002734 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.002745 | orchestrator | Thursday 16 April 2026 06:15:15 +0000 (0:00:00.294) 0:00:05.453 ********
2026-04-16 06:15:18.002764 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.002775 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.002786 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.002796 | orchestrator |
2026-04-16 06:15:18.002807 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.002818 | orchestrator | Thursday 16 April 2026 06:15:15 +0000 (0:00:00.320) 0:00:05.774 ********
2026-04-16 06:15:18.002829 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002839 | orchestrator |
2026-04-16 06:15:18.002850 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.002867 | orchestrator | Thursday 16 April 2026 06:15:16 +0000 (0:00:00.125) 0:00:05.900 ********
2026-04-16 06:15:18.002879 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.002889 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.002900 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.002911 | orchestrator |
2026-04-16 06:15:18.002921 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.002932 | orchestrator | Thursday 16 April 2026 06:15:16 +0000 (0:00:00.456) 0:00:06.356 ********
2026-04-16 06:15:18.002943 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.002954 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.002964 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.002975 | orchestrator |
2026-04-16 06:15:18.002986 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.002997 | orchestrator | Thursday 16 April 2026 06:15:16 +0000 (0:00:00.284) 0:00:06.641 ********
2026-04-16 06:15:18.003007 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.003018 | orchestrator |
2026-04-16 06:15:18.003029 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.003040 | orchestrator | Thursday 16 April 2026 06:15:16 +0000 (0:00:00.109) 0:00:06.751 ********
2026-04-16 06:15:18.003050 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.003061 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.003072 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.003083 | orchestrator |
2026-04-16 06:15:18.003094 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.003105 | orchestrator | Thursday 16 April 2026 06:15:17 +0000 (0:00:00.259) 0:00:07.010 ********
2026-04-16 06:15:18.003116 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:18.003126 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:18.003137 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:18.003147 | orchestrator |
2026-04-16 06:15:18.003158 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:18.003169 | orchestrator | Thursday 16 April 2026 06:15:17 +0000 (0:00:00.281) 0:00:07.291 ********
2026-04-16 06:15:18.003179 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.003190 | orchestrator |
2026-04-16 06:15:18.003201 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:18.003212 | orchestrator | Thursday 16 April 2026 06:15:17 +0000 (0:00:00.294) 0:00:07.586 ********
2026-04-16 06:15:18.003222 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:18.003233 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:18.003244 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:18.003254 | orchestrator |
2026-04-16 06:15:18.003265 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:18.003284 | orchestrator | Thursday 16 April 2026 06:15:17 +0000 (0:00:00.281) 0:00:07.868 ********
2026-04-16 06:15:31.129269 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:31.129381 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:31.129395 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:31.129405 | orchestrator |
2026-04-16 06:15:31.129417 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:31.129428 | orchestrator | Thursday 16 April 2026 06:15:18 +0000 (0:00:00.301) 0:00:08.169 ********
2026-04-16 06:15:31.129438 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.129559 | orchestrator |
2026-04-16 06:15:31.129574 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:31.129585 | orchestrator | Thursday 16 April 2026 06:15:18 +0000 (0:00:00.122) 0:00:08.292 ********
2026-04-16 06:15:31.129596 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.129607 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:31.129617 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:31.129628 | orchestrator |
2026-04-16 06:15:31.129639 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:31.129649 | orchestrator | Thursday 16 April 2026 06:15:18 +0000 (0:00:00.257) 0:00:08.549 ********
2026-04-16 06:15:31.129659 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:31.129670 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:31.129680 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:31.129769 | orchestrator |
2026-04-16 06:15:31.129779 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:31.129789 | orchestrator | Thursday 16 April 2026 06:15:19 +0000 (0:00:00.472) 0:00:09.022 ********
2026-04-16 06:15:31.129798 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.129810 | orchestrator |
2026-04-16 06:15:31.129821 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:31.129832 | orchestrator | Thursday 16 April 2026 06:15:19 +0000 (0:00:00.133) 0:00:09.155 ********
2026-04-16 06:15:31.129843 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.129854 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:31.129864 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:31.129875 | orchestrator |
2026-04-16 06:15:31.129885 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:31.129896 | orchestrator | Thursday 16 April 2026 06:15:19 +0000 (0:00:00.275) 0:00:09.431 ********
2026-04-16 06:15:31.129907 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:31.129955 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:31.129968 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:31.129979 | orchestrator |
2026-04-16 06:15:31.129990 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:31.130001 | orchestrator | Thursday 16 April 2026 06:15:19 +0000 (0:00:00.301) 0:00:09.732 ********
2026-04-16 06:15:31.130012 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.130074 | orchestrator |
2026-04-16 06:15:31.130086 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:31.130097 | orchestrator | Thursday 16 April 2026 06:15:19 +0000 (0:00:00.124) 0:00:09.856 ********
2026-04-16 06:15:31.130109 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.130119 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:31.130129 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:31.130138 | orchestrator |
2026-04-16 06:15:31.130148 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-04-16 06:15:31.130157 | orchestrator | Thursday 16 April 2026 06:15:20 +0000 (0:00:00.454) 0:00:10.310 ********
2026-04-16 06:15:31.130167 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:15:31.130177 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:15:31.130201 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:15:31.130211 | orchestrator |
2026-04-16 06:15:31.130220 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-04-16 06:15:31.130230 | orchestrator | Thursday 16 April 2026 06:15:20 +0000 (0:00:00.294) 0:00:10.605 ********
2026-04-16 06:15:31.130239 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.130249 | orchestrator |
2026-04-16 06:15:31.130258 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-04-16 06:15:31.130268 | orchestrator | Thursday 16 April 2026 06:15:20 +0000 (0:00:00.130) 0:00:10.736 ********
2026-04-16 06:15:31.130278 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:15:31.130287 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:15:31.130297 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:15:31.130306 | orchestrator |
2026-04-16 06:15:31.130326 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-04-16 06:15:31.130336 | orchestrator | Thursday 16 April 2026 06:15:21 +0000 (0:00:00.281) 0:00:11.017 ********
2026-04-16 06:15:31.130346 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:15:31.130355 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:15:31.130365 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:15:31.130374 | orchestrator |
2026-04-16 06:15:31.130384 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-04-16 06:15:31.130393 | orchestrator | Thursday 16 April 2026 06:15:22 +0000 (0:00:01.784) 0:00:12.802 ********
2026-04-16 06:15:31.130403 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-16 06:15:31.130414 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-16 06:15:31.130423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-04-16 06:15:31.130432 | orchestrator |
2026-04-16 06:15:31.130442 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-04-16 06:15:31.130451 | orchestrator | Thursday 16 April 2026 06:15:24 +0000 (0:00:01.838) 0:00:14.641 ********
2026-04-16 06:15:31.130461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-16 06:15:31.130471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-04-16 06:15:31.130481 | orchestrator |
changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-16 06:15:31.130490 | orchestrator | 2026-04-16 06:15:31.130500 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-16 06:15:31.130530 | orchestrator | Thursday 16 April 2026 06:15:26 +0000 (0:00:01.741) 0:00:16.383 ******** 2026-04-16 06:15:31.130540 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 06:15:31.130550 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 06:15:31.130559 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 06:15:31.130569 | orchestrator | 2026-04-16 06:15:31.130578 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-16 06:15:31.130588 | orchestrator | Thursday 16 April 2026 06:15:27 +0000 (0:00:01.438) 0:00:17.821 ******** 2026-04-16 06:15:31.130598 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:31.130607 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:31.130616 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:31.130626 | orchestrator | 2026-04-16 06:15:31.130636 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-16 06:15:31.130728 | orchestrator | Thursday 16 April 2026 06:15:28 +0000 (0:00:00.468) 0:00:18.290 ******** 2026-04-16 06:15:31.130739 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:31.130749 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:31.130758 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:31.130768 | orchestrator | 2026-04-16 06:15:31.130777 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 06:15:31.130787 
| orchestrator | Thursday 16 April 2026 06:15:28 +0000 (0:00:00.277) 0:00:18.568 ******** 2026-04-16 06:15:31.130796 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:15:31.130806 | orchestrator | 2026-04-16 06:15:31.130815 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-16 06:15:31.130825 | orchestrator | Thursday 16 April 2026 06:15:29 +0000 (0:00:00.585) 0:00:19.154 ******** 2026-04-16 06:15:31.130848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:15:31.130883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:15:31.758065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:15:31.758177 | orchestrator | 2026-04-16 06:15:31.758194 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-04-16 06:15:31.758208 | orchestrator | Thursday 16 April 2026 06:15:31 +0000 (0:00:01.834) 0:00:20.988 ******** 2026-04-16 06:15:31.758241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:31.758277 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:31.758298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:31.758311 | orchestrator | skipping: [testbed-node-1] 
2026-04-16 06:15:31.758337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:34.130218 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:34.130309 | orchestrator | 2026-04-16 06:15:34.130319 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-04-16 06:15:34.130328 | orchestrator | Thursday 16 April 2026 06:15:31 +0000 (0:00:00.636) 0:00:21.624 ******** 2026-04-16 06:15:34.130339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:34.130349 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:15:34.130372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:34.130399 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:15:34.130435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 06:15:34.130449 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:15:34.130455 | orchestrator | 2026-04-16 06:15:34.130462 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-04-16 06:15:34.130469 | orchestrator | Thursday 16 April 2026 06:15:32 +0000 (0:00:00.802) 0:00:22.427 ******** 2026-04-16 06:15:34.130492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:16:16.621169 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:16:16.621391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 06:16:16.621422 | orchestrator | 2026-04-16 06:16:16.621441 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 06:16:16.621459 | orchestrator | Thursday 16 April 2026 06:15:34 +0000 (0:00:01.569) 0:00:23.996 ******** 2026-04-16 06:16:16.621475 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:16:16.621493 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:16:16.621508 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:16:16.621524 | orchestrator | 2026-04-16 06:16:16.621541 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 06:16:16.621558 | orchestrator | Thursday 16 April 2026 06:15:34 +0000 (0:00:00.297) 0:00:24.294 ******** 2026-04-16 06:16:16.621575 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:16:16.621592 | orchestrator | 2026-04-16 06:16:16.621608 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-04-16 06:16:16.621624 | orchestrator | Thursday 16 April 2026 06:15:34 +0000 (0:00:00.498) 0:00:24.792 ******** 2026-04-16 06:16:16.621642 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:16:16.621658 | orchestrator | 2026-04-16 06:16:16.621674 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-04-16 06:16:16.621736 | orchestrator | Thursday 16 April 2026 06:15:37 +0000 (0:00:02.105) 0:00:26.897 ******** 2026-04-16 06:16:16.621753 | orchestrator | changed: 
[testbed-node-0] 2026-04-16 06:16:16.621768 | orchestrator | 2026-04-16 06:16:16.621784 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-16 06:16:16.621801 | orchestrator | Thursday 16 April 2026 06:15:39 +0000 (0:00:02.535) 0:00:29.433 ******** 2026-04-16 06:16:16.621818 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:16:16.621835 | orchestrator | 2026-04-16 06:16:16.621852 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 06:16:16.621869 | orchestrator | Thursday 16 April 2026 06:15:55 +0000 (0:00:16.116) 0:00:45.549 ******** 2026-04-16 06:16:16.621886 | orchestrator | 2026-04-16 06:16:16.621899 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 06:16:16.621908 | orchestrator | Thursday 16 April 2026 06:15:55 +0000 (0:00:00.070) 0:00:45.620 ******** 2026-04-16 06:16:16.621917 | orchestrator | 2026-04-16 06:16:16.621927 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 06:16:16.621936 | orchestrator | Thursday 16 April 2026 06:15:55 +0000 (0:00:00.065) 0:00:45.686 ******** 2026-04-16 06:16:16.621946 | orchestrator | 2026-04-16 06:16:16.621955 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-16 06:16:16.621965 | orchestrator | Thursday 16 April 2026 06:15:55 +0000 (0:00:00.071) 0:00:45.757 ******** 2026-04-16 06:16:16.621974 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:16:16.621984 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:16:16.621993 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:16:16.622003 | orchestrator | 2026-04-16 06:16:16.622012 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:16:16.622088 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-04-16 06:16:16.622109 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-16 06:16:16.622118 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-04-16 06:16:16.622128 | orchestrator | 2026-04-16 06:16:16.622137 | orchestrator | 2026-04-16 06:16:16.622147 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:16:16.622156 | orchestrator | Thursday 16 April 2026 06:16:16 +0000 (0:00:20.702) 0:01:06.459 ******** 2026-04-16 06:16:16.622166 | orchestrator | =============================================================================== 2026-04-16 06:16:16.622183 | orchestrator | horizon : Restart horizon container ------------------------------------ 20.70s 2026-04-16 06:16:16.622194 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.12s 2026-04-16 06:16:16.622203 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.54s 2026-04-16 06:16:16.622213 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s 2026-04-16 06:16:16.622222 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s 2026-04-16 06:16:16.622232 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.83s 2026-04-16 06:16:16.622241 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.78s 2026-04-16 06:16:16.622250 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.74s 2026-04-16 06:16:16.622259 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.57s 2026-04-16 06:16:16.622269 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.44s 
2026-04-16 06:16:16.622278 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.03s 2026-04-16 06:16:16.622287 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.80s 2026-04-16 06:16:16.622305 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2026-04-16 06:16:16.622326 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-04-16 06:16:16.965140 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2026-04-16 06:16:16.965219 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.50s 2026-04-16 06:16:16.965227 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s 2026-04-16 06:16:16.965234 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2026-04-16 06:16:16.965240 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.47s 2026-04-16 06:16:16.965246 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.46s 2026-04-16 06:16:19.474502 | orchestrator | 2026-04-16 06:16:19 | INFO  | Task 4efe98d8-9ca2-43e9-bbe4-0852c049e83e (skyline) was prepared for execution. 2026-04-16 06:16:19.474603 | orchestrator | 2026-04-16 06:16:19 | INFO  | It takes a moment until task 4efe98d8-9ca2-43e9-bbe4-0852c049e83e (skyline) has been started and output is visible here. 
2026-04-16 06:16:49.533622 | orchestrator | 2026-04-16 06:16:49.533907 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:16:49.533936 | orchestrator | 2026-04-16 06:16:49.533949 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:16:49.533961 | orchestrator | Thursday 16 April 2026 06:16:23 +0000 (0:00:00.217) 0:00:00.217 ******** 2026-04-16 06:16:49.533972 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:16:49.533984 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:16:49.533995 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:16:49.534005 | orchestrator | 2026-04-16 06:16:49.534062 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:16:49.534075 | orchestrator | Thursday 16 April 2026 06:16:23 +0000 (0:00:00.239) 0:00:00.456 ******** 2026-04-16 06:16:49.534086 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-04-16 06:16:49.534097 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-04-16 06:16:49.534108 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-04-16 06:16:49.534120 | orchestrator | 2026-04-16 06:16:49.534130 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-04-16 06:16:49.534141 | orchestrator | 2026-04-16 06:16:49.534152 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-16 06:16:49.534164 | orchestrator | Thursday 16 April 2026 06:16:24 +0000 (0:00:00.327) 0:00:00.783 ******** 2026-04-16 06:16:49.534178 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:16:49.534191 | orchestrator | 2026-04-16 06:16:49.534204 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-04-16 06:16:49.534219 | orchestrator | Thursday 16 April 2026 06:16:24 +0000 (0:00:00.419) 0:00:01.203 ******** 2026-04-16 06:16:49.534239 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-04-16 06:16:49.534258 | orchestrator | 2026-04-16 06:16:49.534290 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-04-16 06:16:49.534310 | orchestrator | Thursday 16 April 2026 06:16:27 +0000 (0:00:03.365) 0:00:04.569 ******** 2026-04-16 06:16:49.534331 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-04-16 06:16:49.534352 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-04-16 06:16:49.534372 | orchestrator | 2026-04-16 06:16:49.534386 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-04-16 06:16:49.534401 | orchestrator | Thursday 16 April 2026 06:16:34 +0000 (0:00:06.279) 0:00:10.848 ******** 2026-04-16 06:16:49.534414 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:16:49.534467 | orchestrator | 2026-04-16 06:16:49.534488 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-04-16 06:16:49.534520 | orchestrator | Thursday 16 April 2026 06:16:37 +0000 (0:00:03.125) 0:00:13.974 ******** 2026-04-16 06:16:49.534538 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:16:49.534557 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-04-16 06:16:49.534575 | orchestrator | 2026-04-16 06:16:49.534612 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-04-16 06:16:49.534632 | orchestrator | Thursday 16 April 2026 06:16:41 +0000 (0:00:04.046) 0:00:18.021 ******** 2026-04-16 06:16:49.534645 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-16 06:16:49.534657 | orchestrator | 2026-04-16 06:16:49.534676 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-04-16 06:16:49.534719 | orchestrator | Thursday 16 April 2026 06:16:44 +0000 (0:00:03.231) 0:00:21.252 ******** 2026-04-16 06:16:49.534739 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-04-16 06:16:49.534757 | orchestrator | 2026-04-16 06:16:49.534774 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-04-16 06:16:49.534793 | orchestrator | Thursday 16 April 2026 06:16:48 +0000 (0:00:03.707) 0:00:24.959 ******** 2026-04-16 06:16:49.534816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:49.534872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:49.534895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:49.534928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:49.534941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:49.534963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319571 | orchestrator | 2026-04-16 06:16:53.319664 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-16 06:16:53.319679 | orchestrator | Thursday 16 April 2026 06:16:49 +0000 (0:00:01.226) 0:00:26.186 ******** 2026-04-16 06:16:53.319690 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:16:53.319753 | orchestrator | 2026-04-16 06:16:53.319763 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-16 06:16:53.319772 | orchestrator | Thursday 16 April 2026 06:16:50 +0000 (0:00:00.712) 0:00:26.899 ******** 2026-04-16 06:16:53.319784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:16:53.319932 | orchestrator | 2026-04-16 06:16:53.319947 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-16 06:16:53.319956 | orchestrator | Thursday 16 April 2026 06:16:52 +0000 (0:00:02.462) 0:00:29.362 ******** 2026-04-16 06:16:53.319965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:53.319974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:16:53.319984 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:16:53.320001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.603968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.604087 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:16:54.604161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.604811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.604837 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:16:54.604852 | orchestrator | 2026-04-16 06:16:54.604865 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-16 06:16:54.604878 | orchestrator | Thursday 16 April 2026 06:16:53 +0000 (0:00:00.620) 0:00:29.983 ******** 2026-04-16 06:16:54.604891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.604952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.604969 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:16:54.604992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.605005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.605017 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:16:54.605029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-04-16 06:16:54.605061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-04-16 06:17:02.629845 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:17:02.629964 | orchestrator | 2026-04-16 06:17:02.629981 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-04-16 06:17:02.629995 | orchestrator | Thursday 16 April 2026 06:16:54 +0000 (0:00:01.273) 0:00:31.257 ******** 2026-04-16 06:17:02.630084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-04-16 06:17:02.630322 | orchestrator |
2026-04-16 06:17:02.630340 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-04-16 06:17:02.630353 | orchestrator | Thursday 16 April 2026 06:16:56 +0000 (0:00:02.346) 0:00:33.603 ********
2026-04-16 06:17:02.630371 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-16 06:17:02.630392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-16 06:17:02.630412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-04-16 06:17:02.630442 | orchestrator |
2026-04-16 06:17:02.630456 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-04-16 06:17:02.630470 | orchestrator | Thursday 16 April 2026 06:16:58 +0000 (0:00:01.496) 0:00:35.099 ********
2026-04-16 06:17:02.630483 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-16 06:17:02.630496 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-16 06:17:02.630508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-04-16 06:17:02.630521 | orchestrator |
2026-04-16 06:17:02.630533 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-04-16 06:17:02.630547 | orchestrator | Thursday 16 April 2026 06:17:00 +0000 (0:00:01.941) 0:00:37.041 ********
2026-04-16 06:17:02.630569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes':
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:02.630610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.657690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.657860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.657920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.657934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.657946 | orchestrator | 2026-04-16 06:17:04.657959 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-16 06:17:04.657972 | orchestrator | Thursday 16 April 2026 06:17:02 +0000 (0:00:02.251) 0:00:39.292 ******** 2026-04-16 06:17:04.657983 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:17:04.657995 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 06:17:04.658112 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:17:04.658129 | orchestrator | 2026-04-16 06:17:04.658161 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-04-16 06:17:04.658173 | orchestrator | Thursday 16 April 2026 06:17:02 +0000 (0:00:00.293) 0:00:39.585 ******** 2026-04-16 06:17:04.658184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.658207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.658221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.658235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:04.658265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-04-16 06:17:34.347889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-04-16 06:17:34.348035 | orchestrator |
2026-04-16 06:17:34.348056 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-04-16 06:17:34.348070 | orchestrator | Thursday 16 April 2026 06:17:04 +0000 (0:00:01.733) 0:00:41.319 ********
2026-04-16 06:17:34.348082 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:17:34.348095 | orchestrator |
2026-04-16 06:17:34.348106 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-04-16 06:17:34.348117 | orchestrator | Thursday 16 April 2026 06:17:06 +0000 (0:00:02.071) 0:00:43.391 ********
2026-04-16 06:17:34.348128 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:17:34.348139 | orchestrator |
2026-04-16 06:17:34.348150 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-04-16 06:17:34.348162 | orchestrator | Thursday 16 April 2026 06:17:08 +0000 (0:00:02.264) 0:00:45.655 ********
2026-04-16 06:17:34.348174 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:17:34.348185 | orchestrator |
2026-04-16 06:17:34.348196 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 06:17:34.348211 | orchestrator | Thursday 16 April 2026 06:17:16 +0000 (0:00:07.616) 0:00:53.271 ********
2026-04-16 06:17:34.348230 | orchestrator |
2026-04-16 06:17:34.348248 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 06:17:34.348265 | orchestrator | Thursday 16 April 2026 06:17:16 +0000 (0:00:00.068) 0:00:53.340 ********
2026-04-16 06:17:34.348283 | orchestrator |
2026-04-16 06:17:34.348301 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 06:17:34.348319 | orchestrator | Thursday 16 April 2026 06:17:16 +0000 (0:00:00.067) 0:00:53.407 ********
2026-04-16 06:17:34.348338 | orchestrator |
2026-04-16 06:17:34.348357 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-16 06:17:34.348376 | orchestrator | Thursday 16 April 2026 06:17:16 +0000 (0:00:00.086) 0:00:53.494 ********
2026-04-16 06:17:34.348394 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:17:34.348413 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:17:34.348432 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:17:34.348450 | orchestrator |
2026-04-16 06:17:34.348468 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-16 06:17:34.348487 | orchestrator | Thursday 16 April 2026 06:17:24 +0000 (0:00:07.868) 0:01:01.362 ********
2026-04-16 06:17:34.348506 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:17:34.348524 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:17:34.348544 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:17:34.348630 | orchestrator |
2026-04-16 06:17:34.348651 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:17:34.348755 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 06:17:34.348778 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 06:17:34.348832 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 06:17:34.348851 | orchestrator |
2026-04-16 06:17:34.348889 | orchestrator |
2026-04-16 06:17:34.348910 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:17:34.348928 | orchestrator | Thursday 16 April 2026 06:17:34 +0000 (0:00:09.332) 0:01:10.694 ********
2026-04-16 06:17:34.348966 | orchestrator | ===============================================================================
2026-04-16 06:17:34.348986 | orchestrator | skyline : Restart skyline-console container ----------------------------- 9.33s
2026-04-16 06:17:34.349004 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 7.87s
2026-04-16 06:17:34.349023 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.62s
2026-04-16 06:17:34.349040 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.28s
2026-04-16 06:17:34.349058 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.05s
2026-04-16 06:17:34.349076 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.71s
2026-04-16 06:17:34.349094 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.37s
2026-04-16 06:17:34.349112 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.23s
2026-04-16 06:17:34.349155 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.13s
2026-04-16 06:17:34.349172 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.46s
2026-04-16 06:17:34.349188 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.35s
2026-04-16 06:17:34.349205 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.26s
2026-04-16 06:17:34.349221 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.25s
2026-04-16 06:17:34.349238 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.07s
2026-04-16 06:17:34.349254 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 1.94s
2026-04-16 06:17:34.349270 | orchestrator | skyline : Check skyline container --------------------------------------- 1.73s
2026-04-16 06:17:34.349286 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.50s
2026-04-16 06:17:34.349303 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.27s
2026-04-16 06:17:34.349320 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.23s
2026-04-16 06:17:34.349337 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.71s
2026-04-16 06:17:36.637409 | orchestrator | 2026-04-16 06:17:36 | INFO  | Task 7c4b912e-a8e6-48ff-a700-3168711441e4 (glance) was prepared for execution.
2026-04-16 06:17:36.637509 | orchestrator | 2026-04-16 06:17:36 | INFO  | It takes a moment until task 7c4b912e-a8e6-48ff-a700-3168711441e4 (glance) has been started and output is visible here.
2026-04-16 06:18:09.028830 | orchestrator |
2026-04-16 06:18:09.028975 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:18:09.029007 | orchestrator |
2026-04-16 06:18:09.029026 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:18:09.029046 | orchestrator | Thursday 16 April 2026 06:17:40 +0000 (0:00:00.189) 0:00:00.189 ********
2026-04-16 06:18:09.029066 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:18:09.029088 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:18:09.029107 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:18:09.029125 | orchestrator |
2026-04-16 06:18:09.029144 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:18:09.029162 | orchestrator | Thursday 16 April 2026 06:17:40 +0000 (0:00:00.267) 0:00:00.457 ********
2026-04-16 06:18:09.029181 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-16 06:18:09.029200 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-16 06:18:09.029252 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-16 06:18:09.029272 | orchestrator |
2026-04-16 06:18:09.029290 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-16 06:18:09.029309 | orchestrator |
2026-04-16 06:18:09.029329 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 06:18:09.029349 | orchestrator | Thursday 16 April 2026 06:17:40 +0000 (0:00:00.373) 0:00:00.831 ********
2026-04-16 06:18:09.029368 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:18:09.029387 | orchestrator |
2026-04-16 06:18:09.029405 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-04-16 06:18:09.029426 | orchestrator | Thursday 16 April 2026 06:17:41 +0000 (0:00:00.489) 0:00:01.320 ********
2026-04-16 06:18:09.029443 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-04-16 06:18:09.029463 | orchestrator |
2026-04-16 06:18:09.029482 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-04-16 06:18:09.029502 | orchestrator | Thursday 16 April 2026 06:17:44 +0000 (0:00:03.254) 0:00:04.575 ********
2026-04-16 06:18:09.029522 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-04-16 06:18:09.029544 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-04-16 06:18:09.029633 | orchestrator |
2026-04-16 06:18:09.029654 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-04-16 06:18:09.029674 | orchestrator | Thursday 16 April 2026 06:17:51 +0000 (0:00:06.421) 0:00:10.996 ********
2026-04-16 06:18:09.029693 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:18:09.029810 | orchestrator |
2026-04-16 06:18:09.029832 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-04-16 06:18:09.029851 | orchestrator | Thursday 16 April 2026 06:17:54 +0000 (0:00:03.204) 0:00:14.201 ********
2026-04-16 06:18:09.029870 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:18:09.029909 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-04-16 06:18:09.029929 | orchestrator |
2026-04-16 06:18:09.029947 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-04-16 06:18:09.029967 | orchestrator | Thursday 16 April 2026 06:17:58 +0000 (0:00:03.993) 0:00:18.194 ********
2026-04-16 06:18:09.029985 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 06:18:09.030004 | orchestrator |
2026-04-16 06:18:09.030093 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-04-16 06:18:09.030112 | orchestrator | Thursday 16 April 2026 06:18:01 +0000 (0:00:03.191) 0:00:21.386 ********
2026-04-16 06:18:09.030127 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-04-16 06:18:09.030143 | orchestrator |
2026-04-16 06:18:09.030171 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-16 06:18:09.030186 | orchestrator | Thursday 16 April 2026 06:18:05 +0000 (0:00:03.898) 0:00:25.284 ********
2026-04-16 06:18:09.030246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:09.030290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:09.030318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:09.030345 | orchestrator | 2026-04-16 06:18:09.030361 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-04-16 06:18:09.030377 | orchestrator | Thursday 16 April 2026 06:18:08 +0000 (0:00:03.094) 0:00:28.379 ******** 2026-04-16 06:18:09.030393 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:18:09.030412 | orchestrator | 2026-04-16 06:18:09.030435 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-16 06:18:22.978677 | orchestrator | Thursday 16 April 2026 06:18:09 +0000 (0:00:00.611) 0:00:28.990 ******** 2026-04-16 06:18:22.978861 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:18:22.978880 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:18:22.978892 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:18:22.978904 | orchestrator | 2026-04-16 06:18:22.978917 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-04-16 06:18:22.978928 | orchestrator | Thursday 16 April 2026 06:18:11 +0000 (0:00:02.960) 0:00:31.951 ******** 2026-04-16 06:18:22.978941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:18:22.978953 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:18:22.978964 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:18:22.978975 | orchestrator | 2026-04-16 06:18:22.978987 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-04-16 06:18:22.978998 | orchestrator | Thursday 16 April 2026 06:18:13 +0000 (0:00:01.379) 0:00:33.330 ******** 2026-04-16 06:18:22.979009 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 
06:18:22.979020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:18:22.979030 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:18:22.979041 | orchestrator | 2026-04-16 06:18:22.979052 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-04-16 06:18:22.979063 | orchestrator | Thursday 16 April 2026 06:18:14 +0000 (0:00:01.327) 0:00:34.658 ******** 2026-04-16 06:18:22.979074 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:18:22.979086 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:18:22.979097 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:18:22.979108 | orchestrator | 2026-04-16 06:18:22.979119 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-04-16 06:18:22.979131 | orchestrator | Thursday 16 April 2026 06:18:15 +0000 (0:00:00.652) 0:00:35.310 ******** 2026-04-16 06:18:22.979141 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:22.979152 | orchestrator | 2026-04-16 06:18:22.979163 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-04-16 06:18:22.979174 | orchestrator | Thursday 16 April 2026 06:18:15 +0000 (0:00:00.132) 0:00:35.443 ******** 2026-04-16 06:18:22.979185 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:22.979196 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:22.979209 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:22.979222 | orchestrator | 2026-04-16 06:18:22.979235 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 06:18:22.979263 | orchestrator | Thursday 16 April 2026 06:18:15 +0000 (0:00:00.286) 0:00:35.730 ******** 2026-04-16 06:18:22.979276 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:18:22.979289 | orchestrator | 2026-04-16 06:18:22.979301 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-16 06:18:22.979332 | orchestrator | Thursday 16 April 2026 06:18:16 +0000 (0:00:00.756) 0:00:36.486 ******** 2026-04-16 06:18:22.979353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:22.979391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:22.979415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:22.979437 | orchestrator | 2026-04-16 06:18:22.979451 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-16 06:18:22.979463 | orchestrator | Thursday 16 April 2026 06:18:20 +0000 (0:00:03.568) 0:00:40.055 ******** 2026-04-16 06:18:22.979488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:26.040386 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:26.040516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:26.040558 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:26.040573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:26.040586 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:26.040598 | orchestrator | 2026-04-16 06:18:26.040610 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-16 06:18:26.040622 | orchestrator | Thursday 16 April 2026 06:18:22 +0000 (0:00:02.885) 0:00:42.940 ******** 2026-04-16 06:18:26.040659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:26.040681 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:26.040694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:26.040769 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:26.040793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 06:18:55.692358 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.692458 | orchestrator | 2026-04-16 06:18:55.692470 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-16 06:18:55.692480 | orchestrator | Thursday 16 April 2026 06:18:26 +0000 (0:00:03.056) 0:00:45.997 ******** 2026-04-16 06:18:55.692489 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.692497 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.692519 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.692527 | orchestrator | 2026-04-16 06:18:55.692535 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-16 06:18:55.692543 | orchestrator | Thursday 16 April 2026 06:18:28 +0000 (0:00:02.637) 0:00:48.635 ******** 2026-04-16 06:18:55.692556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:55.692569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:55.692619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:18:55.692630 | orchestrator | 2026-04-16 06:18:55.692639 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-16 06:18:55.692647 | orchestrator | Thursday 16 April 2026 06:18:31 +0000 (0:00:03.328) 0:00:51.963 ******** 2026-04-16 06:18:55.692655 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:18:55.692663 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:18:55.692671 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:18:55.692685 | orchestrator | 2026-04-16 06:18:55.692702 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-16 06:18:55.692776 | orchestrator | Thursday 16 April 2026 06:18:36 +0000 (0:00:04.819) 0:00:56.783 ******** 2026-04-16 06:18:55.692789 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.692802 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.692814 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.692826 | orchestrator | 2026-04-16 06:18:55.692838 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-04-16 06:18:55.692851 | orchestrator | Thursday 16 April 2026 06:18:40 +0000 (0:00:03.276) 0:01:00.060 ******** 2026-04-16 06:18:55.692863 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.692875 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.692888 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.692902 | orchestrator | 2026-04-16 06:18:55.692915 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-16 06:18:55.692929 | orchestrator | Thursday 16 April 2026 06:18:43 +0000 (0:00:03.110) 0:01:03.170 ******** 2026-04-16 06:18:55.692942 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.692954 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.692967 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.692978 | orchestrator | 2026-04-16 06:18:55.692991 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-04-16 06:18:55.693004 | orchestrator | Thursday 16 April 2026 06:18:46 +0000 (0:00:03.008) 0:01:06.178 ******** 2026-04-16 06:18:55.693028 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.693042 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.693056 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.693068 | orchestrator | 2026-04-16 06:18:55.693081 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-16 06:18:55.693093 | orchestrator | Thursday 16 April 2026 06:18:49 +0000 (0:00:02.847) 0:01:09.026 ******** 2026-04-16 06:18:55.693105 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.693118 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.693130 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.693143 | orchestrator | 2026-04-16 06:18:55.693155 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-16 06:18:55.693167 | orchestrator | Thursday 16 April 2026 06:18:49 +0000 (0:00:00.398) 0:01:09.424 ******** 2026-04-16 06:18:55.693181 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-16 06:18:55.693196 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:18:55.693209 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-16 06:18:55.693223 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:18:55.693236 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-16 06:18:55.693250 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:18:55.693263 | orchestrator | 2026-04-16 06:18:55.693276 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-16 06:18:55.693290 | orchestrator | Thursday 16 April 2026 06:18:52 +0000 (0:00:02.619) 0:01:12.044 ******** 2026-04-16 06:18:55.693304 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:18:55.693317 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:18:55.693331 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:18:55.693345 | orchestrator | 2026-04-16 06:18:55.693360 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-04-16 06:18:55.693387 | orchestrator | Thursday 16 April 2026 06:18:55 +0000 (0:00:03.607) 0:01:15.651 ******** 2026-04-16 06:20:01.225470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:20:01.225588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:20:01.225657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 06:20:01.225674 | orchestrator | 2026-04-16 06:20:01.225688 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 06:20:01.225701 | orchestrator | Thursday 16 April 2026 06:18:58 +0000 (0:00:03.194) 0:01:18.846 ******** 2026-04-16 06:20:01.225814 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:01.225828 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:20:01.225839 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:20:01.225850 | orchestrator | 2026-04-16 06:20:01.225862 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-04-16 06:20:01.225873 | orchestrator | Thursday 16 April 2026 06:18:59 +0000 (0:00:00.504) 0:01:19.350 ******** 2026-04-16 06:20:01.225884 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.225896 | orchestrator | 2026-04-16 06:20:01.225907 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-04-16 06:20:01.225928 | orchestrator | Thursday 16 April 2026 06:19:01 +0000 (0:00:02.011) 0:01:21.362 ******** 2026-04-16 06:20:01.225938 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.225949 | orchestrator | 2026-04-16 06:20:01.225960 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-04-16 06:20:01.225971 | orchestrator | Thursday 16 April 2026 06:19:03 +0000 (0:00:02.170) 0:01:23.532 ******** 2026-04-16 06:20:01.225984 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.225998 | orchestrator | 2026-04-16 06:20:01.226084 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-04-16 06:20:01.226117 | orchestrator | Thursday 16 April 2026 06:19:05 +0000 (0:00:02.017) 0:01:25.550 ******** 2026-04-16 06:20:01.226130 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.226143 | orchestrator | 2026-04-16 06:20:01.226156 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-04-16 06:20:01.226169 | orchestrator | Thursday 16 April 2026 06:19:32 +0000 (0:00:26.749) 0:01:52.299 ******** 2026-04-16 06:20:01.226182 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.226194 | orchestrator | 2026-04-16 06:20:01.226207 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-16 06:20:01.226220 | orchestrator | Thursday 16 April 2026 06:19:34 +0000 (0:00:02.058) 0:01:54.357 ******** 2026-04-16 06:20:01.226233 | orchestrator | 2026-04-16 06:20:01.226245 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-16 06:20:01.226258 | orchestrator | Thursday 16 April 2026 06:19:34 +0000 (0:00:00.069) 0:01:54.427 ******** 2026-04-16 06:20:01.226270 | orchestrator | 2026-04-16 06:20:01.226283 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-04-16 06:20:01.226296 | orchestrator | Thursday 16 April 2026 06:19:34 +0000 (0:00:00.067) 0:01:54.495 ******** 2026-04-16 06:20:01.226308 | orchestrator | 2026-04-16 06:20:01.226323 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-16 06:20:01.226343 | orchestrator | Thursday 16 April 2026 06:19:34 +0000 (0:00:00.067) 0:01:54.563 ******** 2026-04-16 06:20:01.226361 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:20:01.226378 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:20:01.226396 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:20:01.226415 | orchestrator | 2026-04-16 06:20:01.226430 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:20:01.226442 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-16 06:20:01.226454 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-16 06:20:01.226465 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-16 06:20:01.226476 | orchestrator | 2026-04-16 06:20:01.226487 | orchestrator | 2026-04-16 06:20:01.226498 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:20:01.226509 | orchestrator | Thursday 16 April 2026 06:20:01 +0000 (0:00:26.609) 0:02:21.173 ******** 2026-04-16 06:20:01.226520 | orchestrator | =============================================================================== 2026-04-16 06:20:01.226531 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.75s 2026-04-16 06:20:01.226541 | orchestrator | glance : Restart glance-api container ---------------------------------- 26.61s 2026-04-16 06:20:01.226552 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.42s 2026-04-16 06:20:01.226574 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 4.82s 2026-04-16 06:20:01.518424 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.99s 2026-04-16 06:20:01.518532 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.90s 2026-04-16 06:20:01.518560 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 3.61s 2026-04-16 06:20:01.518567 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.57s 2026-04-16 06:20:01.518575 | orchestrator | glance : Copying over config.json files for services -------------------- 3.33s 2026-04-16 06:20:01.518582 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.28s 2026-04-16 06:20:01.518589 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.25s 2026-04-16 06:20:01.518596 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.20s 2026-04-16 06:20:01.518605 | orchestrator | glance : Check glance containers ---------------------------------------- 3.19s 2026-04-16 06:20:01.518614 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.19s 2026-04-16 06:20:01.518622 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.11s 2026-04-16 06:20:01.518631 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.09s 2026-04-16 06:20:01.518639 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.06s 2026-04-16 06:20:01.518648 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.01s 2026-04-16 06:20:01.518656 | orchestrator | 
glance : Ensuring glance service ceph config subdir exists -------------- 2.96s 2026-04-16 06:20:01.518665 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 2.89s 2026-04-16 06:20:03.880128 | orchestrator | 2026-04-16 06:20:03 | INFO  | Task 2bf165b5-3546-40d2-a1f4-b9137d0fe7b6 (cinder) was prepared for execution. 2026-04-16 06:20:03.880229 | orchestrator | 2026-04-16 06:20:03 | INFO  | It takes a moment until task 2bf165b5-3546-40d2-a1f4-b9137d0fe7b6 (cinder) has been started and output is visible here. 2026-04-16 06:20:38.034382 | orchestrator | 2026-04-16 06:20:38.034528 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:20:38.034552 | orchestrator | 2026-04-16 06:20:38.034564 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:20:38.034577 | orchestrator | Thursday 16 April 2026 06:20:07 +0000 (0:00:00.244) 0:00:00.244 ******** 2026-04-16 06:20:38.034589 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:20:38.034601 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:20:38.034613 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:20:38.034631 | orchestrator | 2026-04-16 06:20:38.034741 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:20:38.034761 | orchestrator | Thursday 16 April 2026 06:20:08 +0000 (0:00:00.304) 0:00:00.548 ******** 2026-04-16 06:20:38.034779 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-16 06:20:38.034799 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-16 06:20:38.034817 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-16 06:20:38.034836 | orchestrator | 2026-04-16 06:20:38.034853 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-16 06:20:38.034872 | orchestrator | 2026-04-16 
06:20:38.034892 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 06:20:38.034910 | orchestrator | Thursday 16 April 2026 06:20:08 +0000 (0:00:00.412) 0:00:00.960 ******** 2026-04-16 06:20:38.034930 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:20:38.034950 | orchestrator | 2026-04-16 06:20:38.034970 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-04-16 06:20:38.034984 | orchestrator | Thursday 16 April 2026 06:20:09 +0000 (0:00:00.518) 0:00:01.478 ******** 2026-04-16 06:20:38.034996 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-04-16 06:20:38.035007 | orchestrator | 2026-04-16 06:20:38.035018 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-04-16 06:20:38.035057 | orchestrator | Thursday 16 April 2026 06:20:12 +0000 (0:00:03.506) 0:00:04.985 ******** 2026-04-16 06:20:38.035069 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-04-16 06:20:38.035081 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-04-16 06:20:38.035092 | orchestrator | 2026-04-16 06:20:38.035103 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-04-16 06:20:38.035113 | orchestrator | Thursday 16 April 2026 06:20:18 +0000 (0:00:06.305) 0:00:11.291 ******** 2026-04-16 06:20:38.035124 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:20:38.035135 | orchestrator | 2026-04-16 06:20:38.035146 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-04-16 06:20:38.035157 | orchestrator | Thursday 16 April 2026 06:20:22 +0000 (0:00:03.127) 
0:00:14.418 ******** 2026-04-16 06:20:38.035167 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:20:38.035178 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-04-16 06:20:38.035189 | orchestrator | 2026-04-16 06:20:38.035200 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-04-16 06:20:38.035211 | orchestrator | Thursday 16 April 2026 06:20:25 +0000 (0:00:03.877) 0:00:18.295 ******** 2026-04-16 06:20:38.035221 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-16 06:20:38.035232 | orchestrator | 2026-04-16 06:20:38.035243 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-04-16 06:20:38.035254 | orchestrator | Thursday 16 April 2026 06:20:29 +0000 (0:00:03.097) 0:00:21.393 ******** 2026-04-16 06:20:38.035280 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-04-16 06:20:38.035290 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-04-16 06:20:38.035301 | orchestrator | 2026-04-16 06:20:38.035312 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-16 06:20:38.035323 | orchestrator | Thursday 16 April 2026 06:20:36 +0000 (0:00:07.016) 0:00:28.409 ******** 2026-04-16 06:20:38.035337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:38.035374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:38.035387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:38.035409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:38.035427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:38.035439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:38.035451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:38.035471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:43.483641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:43.483805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:43.483841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:43.483855 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:43.483867 | orchestrator | 2026-04-16 06:20:43.483881 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 06:20:43.483893 | orchestrator | Thursday 16 April 2026 06:20:38 +0000 (0:00:02.066) 0:00:30.476 ******** 2026-04-16 06:20:43.483905 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:43.483917 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:20:43.483929 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:20:43.483940 | orchestrator | 2026-04-16 06:20:43.483951 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 06:20:43.483962 | orchestrator | Thursday 16 April 2026 06:20:38 +0000 (0:00:00.465) 0:00:30.942 ******** 2026-04-16 06:20:43.483974 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:20:43.483985 | orchestrator | 2026-04-16 06:20:43.484017 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-16 06:20:43.484029 | orchestrator | Thursday 16 April 2026 06:20:39 +0000 (0:00:00.499) 0:00:31.441 ******** 2026-04-16 06:20:43.484040 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-04-16 06:20:43.484052 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-04-16 06:20:43.484062 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-04-16 06:20:43.484073 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-04-16 06:20:43.484084 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-04-16 06:20:43.484095 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-04-16 06:20:43.484105 | orchestrator | 2026-04-16 06:20:43.484122 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-16 06:20:43.484141 | orchestrator | Thursday 16 April 2026 06:20:40 +0000 (0:00:01.532) 0:00:32.974 ******** 2026-04-16 06:20:43.484185 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:43.484225 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:43.484256 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:43.484277 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:43.484324 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:53.748776 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-04-16 06:20:53.748897 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 06:20:53.748933 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 06:20:53.748947 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 06:20:53.748980 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 06:20:53.749014 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 
06:20:53.749026 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-04-16 06:20:53.749039 | orchestrator | 2026-04-16 06:20:53.749051 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-16 06:20:53.749064 | orchestrator | Thursday 16 April 2026 06:20:43 +0000 (0:00:03.112) 0:00:36.087 ******** 2026-04-16 06:20:53.749075 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:20:53.749087 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:20:53.749098 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-04-16 06:20:53.749108 | orchestrator | 2026-04-16 06:20:53.749119 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-16 06:20:53.749136 | orchestrator | Thursday 16 April 2026 06:20:45 +0000 (0:00:01.479) 0:00:37.566 ******** 2026-04-16 06:20:53.749148 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-04-16 06:20:53.749160 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-04-16 06:20:53.749171 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-04-16 06:20:53.749181 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-04-16 06:20:53.749192 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-04-16 06:20:53.749213 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-04-16 06:20:53.749224 | orchestrator | 2026-04-16 06:20:53.749237 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-16 06:20:53.749250 | orchestrator | Thursday 16 April 2026 06:20:47 +0000 (0:00:02.615) 0:00:40.181 ******** 2026-04-16 06:20:53.749264 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-16 06:20:53.749276 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-16 06:20:53.749289 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-16 06:20:53.749301 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-16 06:20:53.749314 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-16 06:20:53.749326 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-04-16 06:20:53.749338 | orchestrator | 2026-04-16 06:20:53.749351 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-16 06:20:53.749364 | orchestrator | Thursday 16 April 2026 06:20:48 +0000 (0:00:00.993) 0:00:41.175 ******** 2026-04-16 06:20:53.749376 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:53.749389 | orchestrator | 2026-04-16 06:20:53.749402 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-16 06:20:53.749415 | orchestrator | Thursday 16 April 2026 06:20:48 +0000 (0:00:00.136) 0:00:41.312 ******** 2026-04-16 06:20:53.749427 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:53.749438 | orchestrator | 
skipping: [testbed-node-1] 2026-04-16 06:20:53.749449 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:20:53.749460 | orchestrator | 2026-04-16 06:20:53.749471 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 06:20:53.749481 | orchestrator | Thursday 16 April 2026 06:20:49 +0000 (0:00:00.464) 0:00:41.776 ******** 2026-04-16 06:20:53.749493 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:20:53.749504 | orchestrator | 2026-04-16 06:20:53.749515 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-16 06:20:53.749526 | orchestrator | Thursday 16 April 2026 06:20:49 +0000 (0:00:00.526) 0:00:42.303 ******** 2026-04-16 06:20:53.749546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:54.622931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:54.623071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:54.623089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 
06:20:54.623193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:54.623228 | orchestrator | 2026-04-16 06:20:54.623241 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-16 06:20:54.623253 | orchestrator | Thursday 16 April 2026 06:20:53 +0000 (0:00:03.899) 0:00:46.202 ******** 2026-04-16 06:20:54.623273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:54.720435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720570 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:54.720584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:54.720597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720680 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:20:54.720692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:54.720703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:54.720803 | orchestrator | skipping: 
[testbed-node-2] 2026-04-16 06:20:54.720815 | orchestrator | 2026-04-16 06:20:54.720827 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-16 06:20:54.720846 | orchestrator | Thursday 16 April 2026 06:20:54 +0000 (0:00:00.881) 0:00:47.083 ******** 2026-04-16 06:20:55.252703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:55.252895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.252912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.252925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.252937 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:20:55.252951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:55.253009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.253031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.253043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.253054 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:20:55.253066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:20:55.253078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:20:55.253106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:20:59.609397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:20:59.609511 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:20:59.609530 | orchestrator | 2026-04-16 06:20:59.609543 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-04-16 06:20:59.609556 | orchestrator | Thursday 16 April 2026 06:20:55 +0000 (0:00:00.820) 0:00:47.904 ******** 2026-04-16 06:20:59.609569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:59.609584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 
06:20:59.609618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:20:59.609650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:20:59.609816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685327 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685443 | orchestrator | 2026-04-16 06:21:11.685454 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-16 06:21:11.685464 | orchestrator | Thursday 16 April 2026 06:20:59 +0000 (0:00:04.151) 0:00:52.056 ******** 2026-04-16 06:21:11.685473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-16 06:21:11.685483 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-16 06:21:11.685492 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-04-16 06:21:11.685501 | orchestrator | 2026-04-16 06:21:11.685509 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-16 06:21:11.685538 | orchestrator | Thursday 16 April 2026 06:21:01 +0000 (0:00:01.766) 0:00:53.822 ******** 2026-04-16 06:21:11.685549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:11.685560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:11.685591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:11.685602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:11.685668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:14.104107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:14.104212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:14.104253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:14.104267 | orchestrator | 2026-04-16 06:21:14.104280 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-16 06:21:14.104292 | orchestrator | Thursday 16 April 2026 06:21:11 +0000 (0:00:10.318) 0:01:04.140 ******** 2026-04-16 06:21:14.104304 | orchestrator | changed: [testbed-node-1] 
2026-04-16 06:21:14.104316 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:21:14.104327 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:21:14.104338 | orchestrator | 2026-04-16 06:21:14.104349 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-16 06:21:14.104359 | orchestrator | Thursday 16 April 2026 06:21:13 +0000 (0:00:01.471) 0:01:05.611 ******** 2026-04-16 06:21:14.104372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:21:14.104400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-04-16 06:21:14.104431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:21:14.104452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:21:14.104464 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:21:14.104475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:21:14.104487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:21:14.104499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:21:14.104524 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:21:17.552164 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:21:17.552276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-04-16 06:21:17.552317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:21:17.552331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 06:21:17.552344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 06:21:17.552365 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:21:17.552383 | orchestrator | 2026-04-16 
06:21:17.552403 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-16 06:21:17.552423 | orchestrator | Thursday 16 April 2026 06:21:14 +0000 (0:00:00.961) 0:01:06.573 ******** 2026-04-16 06:21:17.552442 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:21:17.552460 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:21:17.552479 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:21:17.552497 | orchestrator | 2026-04-16 06:21:17.552514 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-04-16 06:21:17.552550 | orchestrator | Thursday 16 April 2026 06:21:14 +0000 (0:00:00.535) 0:01:07.108 ******** 2026-04-16 06:21:17.552597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:17.552633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:17.552653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-04-16 06:21:17.552674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:17.552696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:17.552768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:21:17.552805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:22:55.828393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:22:55.828504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 06:22:55.828519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:22:55.828530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 06:22:55.828576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-04-16 06:22:55.828588 | orchestrator | 2026-04-16 06:22:55.828600 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 06:22:55.828611 | orchestrator | Thursday 16 April 2026 06:21:17 +0000 (0:00:02.896) 0:01:10.004 ******** 2026-04-16 06:22:55.828621 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:22:55.828632 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:22:55.828642 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:22:55.828651 | orchestrator | 2026-04-16 06:22:55.828662 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-04-16 06:22:55.828672 | orchestrator | Thursday 16 April 2026 06:21:17 +0000 (0:00:00.289) 0:01:10.294 ******** 2026-04-16 06:22:55.828681 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.828691 | orchestrator | 2026-04-16 06:22:55.828761 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-04-16 06:22:55.828774 | orchestrator | Thursday 16 April 2026 06:21:20 +0000 (0:00:02.091) 0:01:12.386 ******** 2026-04-16 06:22:55.828783 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.828793 | orchestrator | 2026-04-16 06:22:55.828803 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-16 06:22:55.828812 | orchestrator | Thursday 16 April 2026 06:21:22 +0000 (0:00:02.228) 0:01:14.615 ******** 2026-04-16 06:22:55.828822 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.828832 | orchestrator | 2026-04-16 06:22:55.828842 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-16 06:22:55.828851 | orchestrator | Thursday 16 April 2026 06:21:41 +0000 (0:00:19.539) 0:01:34.154 ******** 2026-04-16 06:22:55.828861 | orchestrator | 2026-04-16 06:22:55.828871 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-04-16 06:22:55.828880 | orchestrator | Thursday 16 April 2026 06:21:41 +0000 (0:00:00.112) 0:01:34.267 ******** 2026-04-16 06:22:55.828890 | orchestrator | 2026-04-16 06:22:55.828899 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-16 06:22:55.828909 | orchestrator | Thursday 16 April 2026 06:21:41 +0000 (0:00:00.075) 0:01:34.342 ******** 2026-04-16 06:22:55.828919 | orchestrator | 2026-04-16 06:22:55.828928 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-16 06:22:55.828940 | orchestrator | Thursday 16 April 2026 06:21:42 +0000 (0:00:00.082) 0:01:34.424 ******** 2026-04-16 06:22:55.828951 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.828962 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:22:55.828973 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:22:55.828985 | orchestrator | 2026-04-16 06:22:55.828997 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-16 06:22:55.829009 | orchestrator | Thursday 16 April 2026 06:22:11 +0000 (0:00:29.794) 0:02:04.219 ******** 2026-04-16 06:22:55.829019 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.829028 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:22:55.829038 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:22:55.829048 | orchestrator | 2026-04-16 06:22:55.829057 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-16 06:22:55.829067 | orchestrator | Thursday 16 April 2026 06:22:22 +0000 (0:00:10.202) 0:02:14.421 ******** 2026-04-16 06:22:55.829077 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.829094 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:22:55.829104 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:22:55.829114 | orchestrator | 2026-04-16 
06:22:55.829124 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-16 06:22:55.829133 | orchestrator | Thursday 16 April 2026 06:22:47 +0000 (0:00:25.151) 0:02:39.572 ******** 2026-04-16 06:22:55.829143 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:22:55.829153 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:22:55.829162 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:22:55.829172 | orchestrator | 2026-04-16 06:22:55.829182 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-16 06:22:55.829192 | orchestrator | Thursday 16 April 2026 06:22:55 +0000 (0:00:08.335) 0:02:47.908 ******** 2026-04-16 06:22:55.829202 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:22:55.829211 | orchestrator | 2026-04-16 06:22:55.829221 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:22:55.829232 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-16 06:22:55.829244 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 06:22:55.829254 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 06:22:55.829264 | orchestrator | 2026-04-16 06:22:55.829273 | orchestrator | 2026-04-16 06:22:55.829283 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:22:55.829293 | orchestrator | Thursday 16 April 2026 06:22:55 +0000 (0:00:00.268) 0:02:48.176 ******** 2026-04-16 06:22:55.829307 | orchestrator | =============================================================================== 2026-04-16 06:22:55.829317 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 29.79s 2026-04-16 06:22:55.829327 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 25.15s 2026-04-16 06:22:55.829336 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.54s 2026-04-16 06:22:55.829346 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.32s 2026-04-16 06:22:55.829356 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.20s 2026-04-16 06:22:55.829365 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.34s 2026-04-16 06:22:55.829375 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.02s 2026-04-16 06:22:55.829385 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.31s 2026-04-16 06:22:55.829394 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.15s 2026-04-16 06:22:55.829404 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.90s 2026-04-16 06:22:55.829413 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.88s 2026-04-16 06:22:55.829423 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.51s 2026-04-16 06:22:55.829432 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.13s 2026-04-16 06:22:55.829442 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.11s 2026-04-16 06:22:55.829458 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.10s 2026-04-16 06:22:56.167237 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.90s 2026-04-16 06:22:56.167394 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.62s 2026-04-16 06:22:56.167422 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.23s 2026-04-16 06:22:56.167444 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.09s 2026-04-16 06:22:56.167465 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.07s 2026-04-16 06:22:58.451498 | orchestrator | 2026-04-16 06:22:58 | INFO  | Task 7e7663c5-ee5c-4ecb-ab84-0d547c8c98a6 (barbican) was prepared for execution. 2026-04-16 06:22:58.451599 | orchestrator | 2026-04-16 06:22:58 | INFO  | It takes a moment until task 7e7663c5-ee5c-4ecb-ab84-0d547c8c98a6 (barbican) has been started and output is visible here. 2026-04-16 06:23:41.179575 | orchestrator | 2026-04-16 06:23:41.179692 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:23:41.179709 | orchestrator | 2026-04-16 06:23:41.179763 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:23:41.179775 | orchestrator | Thursday 16 April 2026 06:23:01 +0000 (0:00:00.187) 0:00:00.187 ******** 2026-04-16 06:23:41.179786 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:23:41.179797 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:23:41.179809 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:23:41.179825 | orchestrator | 2026-04-16 06:23:41.179840 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:23:41.179850 | orchestrator | Thursday 16 April 2026 06:23:02 +0000 (0:00:00.239) 0:00:00.427 ******** 2026-04-16 06:23:41.179860 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-16 06:23:41.179870 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-16 06:23:41.179880 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-16 06:23:41.179890 | orchestrator | 2026-04-16 06:23:41.179900 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-04-16 06:23:41.179909 | orchestrator | 2026-04-16 06:23:41.179919 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-16 06:23:41.179928 | orchestrator | Thursday 16 April 2026 06:23:02 +0000 (0:00:00.310) 0:00:00.737 ******** 2026-04-16 06:23:41.179939 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:23:41.179949 | orchestrator | 2026-04-16 06:23:41.179959 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-04-16 06:23:41.179968 | orchestrator | Thursday 16 April 2026 06:23:02 +0000 (0:00:00.390) 0:00:01.128 ******** 2026-04-16 06:23:41.179979 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-04-16 06:23:41.179988 | orchestrator | 2026-04-16 06:23:41.179998 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-04-16 06:23:41.180007 | orchestrator | Thursday 16 April 2026 06:23:06 +0000 (0:00:03.432) 0:00:04.561 ******** 2026-04-16 06:23:41.180017 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-04-16 06:23:41.180027 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-04-16 06:23:41.180037 | orchestrator | 2026-04-16 06:23:41.180046 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-04-16 06:23:41.180056 | orchestrator | Thursday 16 April 2026 06:23:12 +0000 (0:00:06.456) 0:00:11.017 ******** 2026-04-16 06:23:41.180066 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:23:41.180076 | orchestrator | 2026-04-16 06:23:41.180085 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-04-16 
06:23:41.180096 | orchestrator | Thursday 16 April 2026 06:23:16 +0000 (0:00:03.283) 0:00:14.300 ******** 2026-04-16 06:23:41.180107 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:23:41.180134 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-04-16 06:23:41.180146 | orchestrator | 2026-04-16 06:23:41.180157 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-04-16 06:23:41.180169 | orchestrator | Thursday 16 April 2026 06:23:20 +0000 (0:00:03.934) 0:00:18.235 ******** 2026-04-16 06:23:41.180180 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-16 06:23:41.180192 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-04-16 06:23:41.180223 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-04-16 06:23:41.180235 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-04-16 06:23:41.180246 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-04-16 06:23:41.180257 | orchestrator | 2026-04-16 06:23:41.180268 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-04-16 06:23:41.180279 | orchestrator | Thursday 16 April 2026 06:23:35 +0000 (0:00:15.761) 0:00:33.997 ******** 2026-04-16 06:23:41.180290 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-04-16 06:23:41.180302 | orchestrator | 2026-04-16 06:23:41.180313 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-16 06:23:41.180324 | orchestrator | Thursday 16 April 2026 06:23:39 +0000 (0:00:03.863) 0:00:37.861 ******** 2026-04-16 06:23:41.180340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:41.180371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:41.180383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:41.180399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:41.180422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:41.180433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:41.180452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726200 | orchestrator |
2026-04-16 06:23:46.726210 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-04-16 06:23:46.726219 | orchestrator | Thursday 16 April 2026 06:23:41 +0000 (0:00:01.506) 0:00:39.367 ********
2026-04-16 06:23:46.726227 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-04-16 06:23:46.726234 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-04-16 06:23:46.726259 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-04-16 06:23:46.726267 | orchestrator |
2026-04-16 06:23:46.726275 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-04-16 06:23:46.726282 | orchestrator | Thursday 16 April 2026 06:23:42 +0000 (0:00:01.093) 0:00:40.461 ********
2026-04-16 06:23:46.726289 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:23:46.726297 | orchestrator |
2026-04-16 06:23:46.726315 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-04-16 06:23:46.726323 | orchestrator | Thursday 16 April 2026 06:23:42 +0000 (0:00:00.301) 0:00:40.763 ********
2026-04-16 06:23:46.726330 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:23:46.726337 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:23:46.726345 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:23:46.726352 | orchestrator |
2026-04-16 06:23:46.726359 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-04-16 06:23:46.726366 | orchestrator | Thursday 16 April 2026 06:23:42 +0000 (0:00:00.280) 0:00:41.043 ********
2026-04-16 06:23:46.726374 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:23:46.726381 | orchestrator |
2026-04-16 06:23:46.726389 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-04-16 06:23:46.726396 | orchestrator | Thursday 16 April 2026 06:23:43 +0000 (0:00:00.522) 0:00:41.566 ********
2026-04-16 06:23:46.726405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:46.726427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:46.726435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:46.726454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:46.726512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.108783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.108914 | orchestrator |
2026-04-16 06:23:48.108932 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-04-16 06:23:48.108946 | orchestrator | Thursday 16 April 2026 06:23:46 +0000 (0:00:03.346) 0:00:44.913 ********
2026-04-16 06:23:48.108974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:48.108988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109012 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:23:48.109025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:48.109056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109088 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:23:48.109105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:48.109117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:48.109140 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:23:48.109151 | orchestrator |
2026-04-16 06:23:48.109162 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-04-16 06:23:48.109173 | orchestrator | Thursday 16 April 2026 06:23:47 +0000 (0:00:00.577) 0:00:45.490 ********
2026-04-16 06:23:48.109194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:51.443145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444447 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:23:51.444475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:51.444496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444563 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:23:51.444618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:51.444651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:23:51.444690 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:23:51.444710 | orchestrator |
2026-04-16 06:23:51.444794 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-04-16 06:23:51.444817 | orchestrator | Thursday 16 April 2026 06:23:48 +0000 (0:00:00.815) 0:00:46.306 ********
2026-04-16 06:23:51.444837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:51.444860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:23:51.444910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:24:00.415933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:24:00.416144 | orchestrator |
2026-04-16 06:24:00.416156 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-04-16 06:24:00.416169 | orchestrator | Thursday 16 April 2026 06:23:51 +0000 (0:00:03.328) 0:00:49.635 ********
2026-04-16 06:24:00.416180 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:24:00.416193 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:00.416204 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:24:00.416215 | orchestrator |
2026-04-16 06:24:00.416245 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-04-16 06:24:00.416257 | orchestrator | Thursday 16 April 2026 06:23:52 +0000 (0:00:01.494) 0:00:51.130 ********
2026-04-16 06:24:00.416268 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:24:00.416279 | orchestrator |
2026-04-16 06:24:00.416290 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-04-16 06:24:00.416301 | orchestrator | Thursday 16 April 2026 06:23:53 +0000 (0:00:00.885) 0:00:52.015 ********
2026-04-16 06:24:00.416312 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:24:00.416323 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:24:00.416334 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:24:00.416344 | orchestrator |
2026-04-16 06:24:00.416355 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-04-16 06:24:00.416366 | orchestrator | Thursday 16 April 2026 06:23:54 +0000 (0:00:00.539) 0:00:52.555 ********
2026-04-16 06:24:00.416409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:24:00.416436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-04-16 06:24:00.416451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 06:24:00.416472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:01.269771 | orchestrator | 2026-04-16 06:24:01.269792 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-16 06:24:01.269810 | orchestrator | Thursday 16 April 2026 06:24:00 +0000 (0:00:06.057) 0:00:58.613 ******** 2026-04-16 06:24:01.269849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 06:24:01.269877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:24:01.269895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:24:01.269926 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:24:01.269946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 06:24:01.269963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:24:01.269980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:24:01.269997 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:24:01.270101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-04-16 06:24:03.555372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:24:03.555523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:24:03.555549 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:24:03.555569 | orchestrator | 2026-04-16 06:24:03.555587 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-04-16 06:24:03.555605 | orchestrator | Thursday 16 April 2026 06:24:01 +0000 (0:00:00.852) 0:00:59.466 ******** 2026-04-16 06:24:03.555623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 06:24:03.555644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 06:24:03.555698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-04-16 06:24:03.555754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:24:03.555872 | orchestrator | 2026-04-16 06:24:03.555884 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-16 06:24:03.555904 | orchestrator | Thursday 16 April 2026 06:24:03 +0000 (0:00:02.280) 0:01:01.746 ******** 2026-04-16 06:24:36.575013 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:24:36.575112 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
06:24:36.575122 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:24:36.575130 | orchestrator |
2026-04-16 06:24:36.575137 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-04-16 06:24:36.575146 | orchestrator | Thursday 16 April 2026 06:24:03 +0000 (0:00:00.286) 0:01:02.033 ********
2026-04-16 06:24:36.575153 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575160 | orchestrator |
2026-04-16 06:24:36.575167 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-04-16 06:24:36.575174 | orchestrator | Thursday 16 April 2026 06:24:05 +0000 (0:00:02.112) 0:01:04.145 ********
2026-04-16 06:24:36.575180 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575187 | orchestrator |
2026-04-16 06:24:36.575194 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-04-16 06:24:36.575201 | orchestrator | Thursday 16 April 2026 06:24:08 +0000 (0:00:02.198) 0:01:06.344 ********
2026-04-16 06:24:36.575208 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575215 | orchestrator |
2026-04-16 06:24:36.575221 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-16 06:24:36.575228 | orchestrator | Thursday 16 April 2026 06:24:20 +0000 (0:00:11.931) 0:01:18.276 ********
2026-04-16 06:24:36.575235 | orchestrator |
2026-04-16 06:24:36.575242 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-16 06:24:36.575248 | orchestrator | Thursday 16 April 2026 06:24:20 +0000 (0:00:00.065) 0:01:18.341 ********
2026-04-16 06:24:36.575255 | orchestrator |
2026-04-16 06:24:36.575262 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-16 06:24:36.575268 | orchestrator | Thursday 16 April 2026 06:24:20 +0000 (0:00:00.065) 0:01:18.407 ********
2026-04-16 06:24:36.575275 | orchestrator |
2026-04-16 06:24:36.575282 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-16 06:24:36.575289 | orchestrator | Thursday 16 April 2026 06:24:20 +0000 (0:00:00.068) 0:01:18.475 ********
2026-04-16 06:24:36.575295 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575302 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:24:36.575309 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:24:36.575316 | orchestrator |
2026-04-16 06:24:36.575336 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-16 06:24:36.575351 | orchestrator | Thursday 16 April 2026 06:24:26 +0000 (0:00:06.259) 0:01:24.735 ********
2026-04-16 06:24:36.575358 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575365 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:24:36.575372 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:24:36.575378 | orchestrator |
2026-04-16 06:24:36.575385 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-16 06:24:36.575391 | orchestrator | Thursday 16 April 2026 06:24:31 +0000 (0:00:04.593) 0:01:29.328 ********
2026-04-16 06:24:36.575398 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:24:36.575405 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:24:36.575411 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:24:36.575418 | orchestrator |
2026-04-16 06:24:36.575425 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:24:36.575433 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 06:24:36.575441 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:24:36.575447 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:24:36.575473 | orchestrator |
2026-04-16 06:24:36.575481 | orchestrator |
2026-04-16 06:24:36.575487 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:24:36.575494 | orchestrator | Thursday 16 April 2026 06:24:36 +0000 (0:00:05.132) 0:01:34.461 ********
2026-04-16 06:24:36.575501 | orchestrator | ===============================================================================
2026-04-16 06:24:36.575507 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.76s
2026-04-16 06:24:36.575514 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.93s
2026-04-16 06:24:36.575521 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.46s
2026-04-16 06:24:36.575527 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.26s
2026-04-16 06:24:36.575534 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.06s
2026-04-16 06:24:36.575540 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.13s
2026-04-16 06:24:36.575547 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.59s
2026-04-16 06:24:36.575553 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.93s
2026-04-16 06:24:36.575560 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.86s
2026-04-16 06:24:36.575566 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.43s
2026-04-16 06:24:36.575573 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.35s
2026-04-16 06:24:36.575580 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.33s
2026-04-16 06:24:36.575598 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.28s
2026-04-16 06:24:36.575605 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.28s
2026-04-16 06:24:36.575611 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.20s
2026-04-16 06:24:36.575633 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.11s
2026-04-16 06:24:36.575640 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.51s
2026-04-16 06:24:36.575646 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.49s
2026-04-16 06:24:36.575653 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.09s
2026-04-16 06:24:36.575660 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.89s
2026-04-16 06:24:38.778394 | orchestrator | 2026-04-16 06:24:38 | INFO  | Task cd0f9c82-a29a-4587-9773-cff340f0ff19 (designate) was prepared for execution.
2026-04-16 06:24:38.778490 | orchestrator | 2026-04-16 06:24:38 | INFO  | It takes a moment until task cd0f9c82-a29a-4587-9773-cff340f0ff19 (designate) has been started and output is visible here.
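The PLAY RECAP above uses Ansible's fixed per-host counter format (`ok=… changed=… unreachable=… failed=…`), which is what CI tooling typically greps to decide whether a play succeeded. As an illustrative aside, a minimal sketch of parsing such a line into integers; the helper name and regex are our own, not part of this job or of OSISM tooling:

```python
import re

# Matches one Ansible PLAY RECAP line, e.g.
# "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"
    r"ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+unreachable=(?P<unreachable>\d+)\s+"
    r"failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)\s+ignored=(?P<ignored>\d+)"
)

def parse_recap_line(line: str) -> dict:
    """Parse one PLAY RECAP line into the host name and integer counters."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    fields = m.groupdict()
    return {"host": fields.pop("host"), **{k: int(v) for k, v in fields.items()}}

recap = parse_recap_line(
    "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
# The play passed on this host: nothing failed and nothing was unreachable.
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

A run is usually considered green only when every host's recap shows `failed=0` and `unreachable=0`, which holds for all three testbed nodes above.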
2026-04-16 06:25:09.406616 | orchestrator |
2026-04-16 06:25:09.406798 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:25:09.406821 | orchestrator |
2026-04-16 06:25:09.406833 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:25:09.406845 | orchestrator | Thursday 16 April 2026 06:24:42 +0000 (0:00:00.187) 0:00:00.187 ********
2026-04-16 06:25:09.406856 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:25:09.406867 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:25:09.406878 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:25:09.406889 | orchestrator |
2026-04-16 06:25:09.406900 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:25:09.406911 | orchestrator | Thursday 16 April 2026 06:24:42 +0000 (0:00:00.239) 0:00:00.427 ********
2026-04-16 06:25:09.406922 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-16 06:25:09.406934 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-16 06:25:09.406944 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-16 06:25:09.406977 | orchestrator |
2026-04-16 06:25:09.406989 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-16 06:25:09.407000 | orchestrator |
2026-04-16 06:25:09.407011 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-16 06:25:09.407022 | orchestrator | Thursday 16 April 2026 06:24:43 +0000 (0:00:00.475) 0:00:00.756 ********
2026-04-16 06:25:09.407033 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:25:09.407044 | orchestrator |
2026-04-16 06:25:09.407054 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-04-16 06:25:09.407065 | orchestrator | Thursday 16 April 2026 06:24:43 +0000 (0:00:00.475) 0:00:01.231 ********
2026-04-16 06:25:09.407076 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-04-16 06:25:09.407087 | orchestrator |
2026-04-16 06:25:09.407098 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-04-16 06:25:09.407108 | orchestrator | Thursday 16 April 2026 06:24:47 +0000 (0:00:03.378) 0:00:04.610 ********
2026-04-16 06:25:09.407119 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-04-16 06:25:09.407130 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-04-16 06:25:09.407141 | orchestrator |
2026-04-16 06:25:09.407151 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-04-16 06:25:09.407165 | orchestrator | Thursday 16 April 2026 06:24:53 +0000 (0:00:06.147) 0:00:10.757 ********
2026-04-16 06:25:09.407178 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:25:09.407191 | orchestrator |
2026-04-16 06:25:09.407203 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-04-16 06:25:09.407216 | orchestrator | Thursday 16 April 2026 06:24:56 +0000 (0:00:03.142) 0:00:13.899 ********
2026-04-16 06:25:09.407228 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:25:09.407241 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-04-16 06:25:09.407254 | orchestrator |
2026-04-16 06:25:09.407266 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-04-16 06:25:09.407279 | orchestrator | Thursday 16 April 2026 06:25:00 +0000 (0:00:03.900) 0:00:17.799 ********
2026-04-16 06:25:09.407292 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-04-16 06:25:09.407305 | orchestrator | 2026-04-16 06:25:09.407318 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-04-16 06:25:09.407331 | orchestrator | Thursday 16 April 2026 06:25:03 +0000 (0:00:03.309) 0:00:21.109 ******** 2026-04-16 06:25:09.407344 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-04-16 06:25:09.407356 | orchestrator | 2026-04-16 06:25:09.407369 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-16 06:25:09.407382 | orchestrator | Thursday 16 April 2026 06:25:07 +0000 (0:00:03.994) 0:00:25.104 ******** 2026-04-16 06:25:09.407410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:09.407448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:09.407473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:09.407485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:09.407497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:09.407514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:09.407526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:09.407551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.303894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.303993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 
06:25:15.304136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:15.304164 | orchestrator | 2026-04-16 06:25:15.304179 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-16 06:25:15.304194 | orchestrator | Thursday 16 April 2026 06:25:10 +0000 (0:00:02.521) 0:00:27.625 ******** 2026-04-16 06:25:15.304209 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:15.304224 | orchestrator | 2026-04-16 06:25:15.304240 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-04-16 06:25:15.304254 | orchestrator | Thursday 16 April 2026 06:25:10 +0000 (0:00:00.134) 0:00:27.760 ******** 2026-04-16 06:25:15.304266 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
06:25:15.304280 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:25:15.304293 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:25:15.304318 | orchestrator | 2026-04-16 06:25:15.304333 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-16 06:25:15.304348 | orchestrator | Thursday 16 April 2026 06:25:10 +0000 (0:00:00.486) 0:00:28.247 ******** 2026-04-16 06:25:15.304370 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:25:15.304381 | orchestrator | 2026-04-16 06:25:15.304391 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-16 06:25:15.304400 | orchestrator | Thursday 16 April 2026 06:25:11 +0000 (0:00:00.529) 0:00:28.776 ******** 2026-04-16 06:25:15.304411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:15.304430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:17.055368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:17.055517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:17.055847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:18.039592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:18.039695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:18.039788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:18.039814 | orchestrator | 2026-04-16 06:25:18.039829 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-16 06:25:18.039842 | orchestrator | Thursday 16 April 2026 06:25:17 +0000 (0:00:05.792) 0:00:34.568 ******** 2026-04-16 06:25:18.039869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:18.039882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:18.039912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.039925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.039937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.039961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-04-16 06:25:18.039972 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:18.039991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:18.040003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:18.040014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.040034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.762954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763079 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:25:18.763111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:18.763126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:18.763138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 
06:25:18.763219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763231 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:25:18.763241 | orchestrator | 2026-04-16 06:25:18.763252 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-04-16 06:25:18.763265 | orchestrator | Thursday 16 April 2026 06:25:18 +0000 (0:00:01.084) 0:00:35.653 ******** 2026-04-16 06:25:18.763282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:18.763293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:18.763302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:18.763319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070240 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:19.070267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:19.070276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:19.070283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070342 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:25:19.070352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:19.070359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:19.070365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:19.070388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:23.302663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:23.302866 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:25:23.302888 | orchestrator | 2026-04-16 06:25:23.302901 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-04-16 
06:25:23.302913 | orchestrator | Thursday 16 April 2026 06:25:19 +0000 (0:00:00.929) 0:00:36.582 ******** 2026-04-16 06:25:23.302943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:23.302957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:23.302991 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:23.303024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:23.303131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:34.134717 | orchestrator |
2026-04-16 06:25:34.134752 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-04-16 06:25:34.134764 | orchestrator | Thursday 16 April 2026 06:25:25 +0000 (0:00:05.956) 0:00:42.539 ********
2026-04-16 06:25:34.134782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:34.134793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:34.134812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:34.134823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:34.134843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:41.785674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:41.785824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.785996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.786007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.786064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:41.786080 | orchestrator |
2026-04-16 06:25:41.786092 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-04-16 06:25:41.786105 | orchestrator | Thursday 16 April 2026 06:25:38 +0000 (0:00:13.352) 0:00:55.891 ********
2026-04-16 06:25:41.786125 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-16 06:25:45.861706 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-16 06:25:45.861898 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-04-16 06:25:45.861926 | orchestrator |
2026-04-16 06:25:45.861940 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-04-16 06:25:45.861952 | orchestrator | Thursday 16 April 2026 06:25:41 +0000 (0:00:02.309) 0:00:59.300 ********
2026-04-16 06:25:45.861981 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-16 06:25:45.861992 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-16 06:25:45.862087 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-04-16 06:25:45.862103 | orchestrator |
2026-04-16 06:25:45.862115 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-04-16 06:25:45.862126 | orchestrator | Thursday 16 April 2026 06:25:44 +0000 (0:00:02.309) 0:01:01.610 ********
2026-04-16 06:25:45.862143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:45.862162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:45.862175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:45.862211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:45.862233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:45.862255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:45.862269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:45.862282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:45.862295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:45.862308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:45.862331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:48.572235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:25:48.572312 | orchestrator |
2026-04-16 06:25:48.572319 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2026-04-16 06:25:48.572326 | orchestrator | Thursday 16 April 2026 06:25:46 +0000 (0:00:02.820) 0:01:04.430 ********
2026-04-16 06:25:48.572334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:48.572341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:48.572347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-04-16 06:25:48.572354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:48.572372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.556932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:49.557069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-04-16 06:25:49.557175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-04-16 06:25:49.557187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:49.557198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:49.557209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:49.557221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:49.557240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:49.557252 | orchestrator | 2026-04-16 06:25:49.557269 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-16 06:25:49.557290 | orchestrator | Thursday 16 April 2026 06:25:49 +0000 (0:00:02.634) 0:01:07.065 ******** 2026-04-16 06:25:50.462910 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:50.463000 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:25:50.463011 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:25:50.463020 | orchestrator | 2026-04-16 06:25:50.463028 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-16 06:25:50.463038 | orchestrator | Thursday 16 April 2026 06:25:49 +0000 (0:00:00.295) 0:01:07.361 ******** 2026-04-16 06:25:50.463051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:50.463064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:50.463074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463158 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:50.463167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:50.463176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:50.463184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:50.463224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:53.712578 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:25:53.712695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-04-16 06:25:53.712717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 06:25:53.712774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 06:25:53.712813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 06:25:53.712826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 06:25:53.712851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:25:53.712863 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:25:53.712875 | orchestrator | 2026-04-16 06:25:53.712903 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-04-16 06:25:53.712916 | orchestrator | Thursday 16 April 2026 06:25:50 +0000 (0:00:00.726) 0:01:08.088 ******** 2026-04-16 06:25:53.712928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:53.712941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:53.712961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-04-16 06:25:53.712972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:53.712996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:25:55.410880 | orchestrator | 2026-04-16 06:25:55.410887 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-16 06:25:55.410894 | orchestrator | Thursday 16 April 2026 06:25:55 +0000 (0:00:04.545) 0:01:12.633 ******** 2026-04-16 06:25:55.410899 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:25:55.410909 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:27:17.545027 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:27:17.545137 | orchestrator | 2026-04-16 06:27:17.545152 | orchestrator | TASK [designate : Creating Designate databases] 
********************************
2026-04-16 06:27:17.545165 | orchestrator | Thursday 16 April 2026 06:25:55 +0000 (0:00:00.289) 0:01:12.923 ********
2026-04-16 06:27:17.545175 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-04-16 06:27:17.545185 | orchestrator |
2026-04-16 06:27:17.545195 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-04-16 06:27:17.545205 | orchestrator | Thursday 16 April 2026 06:25:57 +0000 (0:00:02.029) 0:01:14.953 ********
2026-04-16 06:27:17.545215 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-04-16 06:27:17.545225 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-04-16 06:27:17.545235 | orchestrator |
2026-04-16 06:27:17.545245 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-04-16 06:27:17.545254 | orchestrator | Thursday 16 April 2026 06:25:59 +0000 (0:00:02.150) 0:01:17.103 ********
2026-04-16 06:27:17.545264 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545295 | orchestrator |
2026-04-16 06:27:17.545305 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-16 06:27:17.545315 | orchestrator | Thursday 16 April 2026 06:26:14 +0000 (0:00:15.177) 0:01:32.281 ********
2026-04-16 06:27:17.545324 | orchestrator |
2026-04-16 06:27:17.545334 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-16 06:27:17.545343 | orchestrator | Thursday 16 April 2026 06:26:14 +0000 (0:00:00.066) 0:01:32.347 ********
2026-04-16 06:27:17.545353 | orchestrator |
2026-04-16 06:27:17.545362 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-04-16 06:27:17.545373 | orchestrator | Thursday 16 April 2026 06:26:14 +0000 (0:00:00.070) 0:01:32.418 ********
2026-04-16 06:27:17.545382 | orchestrator |
2026-04-16 06:27:17.545392 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-04-16 06:27:17.545401 | orchestrator | Thursday 16 April 2026 06:26:14 +0000 (0:00:00.071) 0:01:32.490 ********
2026-04-16 06:27:17.545411 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545420 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545429 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545439 | orchestrator |
2026-04-16 06:27:17.545448 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-04-16 06:27:17.545458 | orchestrator | Thursday 16 April 2026 06:26:27 +0000 (0:00:12.520) 0:01:45.010 ********
2026-04-16 06:27:17.545467 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545477 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545486 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545495 | orchestrator |
2026-04-16 06:27:17.545522 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-04-16 06:27:17.545543 | orchestrator | Thursday 16 April 2026 06:26:37 +0000 (0:00:10.241) 0:01:55.251 ********
2026-04-16 06:27:17.545553 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545562 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545574 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545584 | orchestrator |
2026-04-16 06:27:17.545595 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-04-16 06:27:17.545606 | orchestrator | Thursday 16 April 2026 06:26:43 +0000 (0:00:05.394) 0:02:00.646 ********
2026-04-16 06:27:17.545617 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545627 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545638 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545648 | orchestrator |
2026-04-16 06:27:17.545659 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-04-16 06:27:17.545670 | orchestrator | Thursday 16 April 2026 06:26:48 +0000 (0:00:05.413) 0:02:06.060 ********
2026-04-16 06:27:17.545681 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545691 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545703 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545713 | orchestrator |
2026-04-16 06:27:17.545724 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-04-16 06:27:17.545764 | orchestrator | Thursday 16 April 2026 06:26:59 +0000 (0:00:10.880) 0:02:16.940 ********
2026-04-16 06:27:17.545775 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545786 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:27:17.545796 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:27:17.545807 | orchestrator |
2026-04-16 06:27:17.545817 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-04-16 06:27:17.545828 | orchestrator | Thursday 16 April 2026 06:27:10 +0000 (0:00:10.707) 0:02:27.648 ********
2026-04-16 06:27:17.545839 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:27:17.545849 | orchestrator |
2026-04-16 06:27:17.545860 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:27:17.545873 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 06:27:17.545894 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:27:17.545905 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:27:17.545915 | orchestrator |
2026-04-16 06:27:17.545925 | orchestrator |
2026-04-16 06:27:17.545934 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:27:17.545944 | orchestrator | Thursday 16 April 2026 06:27:17 +0000 (0:00:07.063) 0:02:34.712 ********
2026-04-16 06:27:17.545967 | orchestrator | ===============================================================================
2026-04-16 06:27:17.545976 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.18s
2026-04-16 06:27:17.545986 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.35s
2026-04-16 06:27:17.546012 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.52s
2026-04-16 06:27:17.546069 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.88s
2026-04-16 06:27:17.546079 | orchestrator | designate : Restart designate-worker container ------------------------- 10.71s
2026-04-16 06:27:17.546088 | orchestrator | designate : Restart designate-api container ---------------------------- 10.24s
2026-04-16 06:27:17.546098 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.06s
2026-04-16 06:27:17.546107 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.15s
2026-04-16 06:27:17.546117 | orchestrator | designate : Copying over config.json files for services ----------------- 5.96s
2026-04-16 06:27:17.546126 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.79s
2026-04-16 06:27:17.546136 | orchestrator | designate : Restart designate-producer container ------------------------ 5.41s
2026-04-16 06:27:17.546145 | orchestrator | designate : Restart designate-central container ------------------------- 5.39s
2026-04-16 06:27:17.546155 | orchestrator | designate : Check designate containers ---------------------------------- 4.55s
2026-04-16 06:27:17.546164 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.99s
2026-04-16 06:27:17.546174 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.90s
2026-04-16 06:27:17.546183 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.41s
2026-04-16 06:27:17.546193 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.38s
2026-04-16 06:27:17.546202 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.31s
2026-04-16 06:27:17.546212 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.14s
2026-04-16 06:27:17.546221 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.82s
2026-04-16 06:27:19.795288 | orchestrator | 2026-04-16 06:27:19 | INFO  | Task 97042145-9e6c-4d1e-99cf-5af14ce8cd99 (octavia) was prepared for execution.
2026-04-16 06:27:19.795385 | orchestrator | 2026-04-16 06:27:19 | INFO  | It takes a moment until task 97042145-9e6c-4d1e-99cf-5af14ce8cd99 (octavia) has been started and output is visible here.
2026-04-16 06:29:20.858679 | orchestrator |
2026-04-16 06:29:20.858937 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:29:20.858967 | orchestrator |
2026-04-16 06:29:20.858986 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:29:20.859005 | orchestrator | Thursday 16 April 2026 06:27:23 +0000 (0:00:00.187) 0:00:00.187 ********
2026-04-16 06:29:20.859023 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:20.859041 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:29:20.859058 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:29:20.859075 | orchestrator |
2026-04-16 06:29:20.859092 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:29:20.859110 | orchestrator | Thursday 16 April 2026 06:27:23 +0000 (0:00:00.233) 0:00:00.420 ********
2026-04-16 06:29:20.859163 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-04-16 06:29:20.859185 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-04-16 06:29:20.859205 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-04-16 06:29:20.859223 | orchestrator |
2026-04-16 06:29:20.859244 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-04-16 06:29:20.859263 | orchestrator |
2026-04-16 06:29:20.859282 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:29:20.859303 | orchestrator | Thursday 16 April 2026 06:27:24 +0000 (0:00:00.332) 0:00:00.752 ********
2026-04-16 06:29:20.859324 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:29:20.859345 | orchestrator |
2026-04-16 06:29:20.859366 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-04-16 06:29:20.859384 | orchestrator | Thursday 16 April 2026 06:27:24 +0000 (0:00:00.470) 0:00:01.223 ********
2026-04-16 06:29:20.859404 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-04-16 06:29:20.859422 | orchestrator |
2026-04-16 06:29:20.859444 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-04-16 06:29:20.859484 | orchestrator | Thursday 16 April 2026 06:27:27 +0000 (0:00:03.325) 0:00:04.549 ********
2026-04-16 06:29:20.859503 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-04-16 06:29:20.859523 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-04-16 06:29:20.859543 | orchestrator |
2026-04-16 06:29:20.859563 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-04-16 06:29:20.859580 | orchestrator | Thursday 16 April 2026 06:27:34 +0000 (0:00:06.359) 0:00:10.908 ********
2026-04-16 06:29:20.859599 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:29:20.859617 | orchestrator |
2026-04-16 06:29:20.859635 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-04-16 06:29:20.859654 | orchestrator | Thursday 16 April 2026 06:27:37 +0000 (0:00:03.156) 0:00:14.064 ********
2026-04-16 06:29:20.859672 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:29:20.859708 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-16 06:29:20.859727 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-04-16 06:29:20.859776 | orchestrator |
2026-04-16 06:29:20.859795 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-04-16 06:29:20.859813 | orchestrator | Thursday 16 April 2026 06:27:45 +0000 (0:00:08.286) 0:00:22.351 ********
2026-04-16 06:29:20.859831 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 06:29:20.859850 | orchestrator |
2026-04-16 06:29:20.859868 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-04-16 06:29:20.859887 | orchestrator | Thursday 16 April 2026 06:27:48 +0000 (0:00:03.205) 0:00:25.556 ********
2026-04-16 06:29:20.859906 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-16 06:29:20.859924 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-04-16 06:29:20.859942 | orchestrator |
2026-04-16 06:29:20.859960 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-04-16 06:29:20.859978 | orchestrator | Thursday 16 April 2026 06:27:56 +0000 (0:00:07.239) 0:00:32.795 ********
2026-04-16 06:29:20.859997 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-04-16 06:29:20.860015 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-04-16 06:29:20.860033 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-04-16 06:29:20.860052 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-04-16 06:29:20.860071 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-04-16 06:29:20.860106 | orchestrator |
2026-04-16 06:29:20.860125 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:29:20.860144 | orchestrator | Thursday 16 April 2026 06:28:11 +0000 (0:00:15.154) 0:00:47.949 ********
2026-04-16 06:29:20.860163 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:29:20.860181 | orchestrator |
2026-04-16 06:29:20.860200 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-04-16 06:29:20.860218 | orchestrator | Thursday 16 April 2026 06:28:12 +0000 (0:00:00.740) 0:00:48.690 ********
2026-04-16 06:29:20.860237 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.860255 | orchestrator |
2026-04-16 06:29:20.860274 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-04-16 06:29:20.860293 | orchestrator | Thursday 16 April 2026 06:28:16 +0000 (0:00:04.902) 0:00:53.592 ********
2026-04-16 06:29:20.860312 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.860330 | orchestrator |
2026-04-16 06:29:20.860349 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-16 06:29:20.860396 | orchestrator | Thursday 16 April 2026 06:28:20 +0000 (0:00:03.802) 0:00:57.395 ********
2026-04-16 06:29:20.860416 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:20.860435 | orchestrator |
2026-04-16 06:29:20.860453 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-04-16 06:29:20.860471 | orchestrator | Thursday 16 April 2026 06:28:23 +0000 (0:00:03.051) 0:01:00.446 ********
2026-04-16 06:29:20.860490 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-16 06:29:20.860509 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-16 06:29:20.860527 | orchestrator |
2026-04-16 06:29:20.860545 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-04-16 06:29:20.860563 | orchestrator | Thursday 16 April 2026 06:28:33 +0000 (0:00:09.214) 0:01:09.660 ********
2026-04-16 06:29:20.860581 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-04-16 06:29:20.860601 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-04-16 06:29:20.860621 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-04-16 06:29:20.860647 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-04-16 06:29:20.860665 | orchestrator |
2026-04-16 06:29:20.860684 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-04-16 06:29:20.860703 | orchestrator | Thursday 16 April 2026 06:28:48 +0000 (0:00:15.526) 0:01:25.186 ********
2026-04-16 06:29:20.860721 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.860771 | orchestrator |
2026-04-16 06:29:20.860790 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-04-16 06:29:20.860809 | orchestrator | Thursday 16 April 2026 06:28:53 +0000 (0:00:04.656) 0:01:29.843 ********
2026-04-16 06:29:20.860827 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.860845 | orchestrator |
2026-04-16 06:29:20.860864 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-04-16 06:29:20.860883 | orchestrator | Thursday 16 April 2026 06:28:58 +0000 (0:00:05.531) 0:01:35.374 ********
2026-04-16 06:29:20.860902 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:29:20.860921 | orchestrator |
2026-04-16 06:29:20.860940 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-04-16 06:29:20.860958 | orchestrator | Thursday 16 April 2026 06:28:58 +0000 (0:00:00.204) 0:01:35.578 ********
2026-04-16 06:29:20.860977 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:20.860996 | orchestrator |
2026-04-16 06:29:20.861014 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:29:20.861045 | orchestrator | Thursday 16 April 2026 06:29:03 +0000 (0:00:04.269) 0:01:39.847 ********
2026-04-16 06:29:20.861073 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:29:20.861092 | orchestrator |
2026-04-16 06:29:20.861112 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-04-16 06:29:20.861130 | orchestrator | Thursday 16 April 2026 06:29:04 +0000 (0:00:01.094) 0:01:40.942 ********
2026-04-16 06:29:20.861148 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861166 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861185 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861204 | orchestrator |
2026-04-16 06:29:20.861222 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-04-16 06:29:20.861241 | orchestrator | Thursday 16 April 2026 06:29:09 +0000 (0:00:04.864) 0:01:45.807 ********
2026-04-16 06:29:20.861259 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861277 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861296 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861314 | orchestrator |
2026-04-16 06:29:20.861332 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-04-16 06:29:20.861351 | orchestrator | Thursday 16 April 2026 06:29:13 +0000 (0:00:04.213) 0:01:50.021 ********
2026-04-16 06:29:20.861369 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861387 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861405 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861423 | orchestrator |
2026-04-16 06:29:20.861441 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-04-16 06:29:20.861459 | orchestrator | Thursday 16 April 2026 06:29:14 +0000 (0:00:00.972) 0:01:50.993 ********
2026-04-16 06:29:20.861479 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:29:20.861497 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:29:20.861516 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:20.861534 | orchestrator |
2026-04-16 06:29:20.861553 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-04-16 06:29:20.861572 | orchestrator | Thursday 16 April 2026 06:29:16 +0000 (0:00:01.894) 0:01:52.887 ********
2026-04-16 06:29:20.861591 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861609 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861627 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861709 | orchestrator |
2026-04-16 06:29:20.861757 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-04-16 06:29:20.861778 | orchestrator | Thursday 16 April 2026 06:29:17 +0000 (0:00:01.245) 0:01:54.133 ********
2026-04-16 06:29:20.861795 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861813 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861831 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861849 | orchestrator |
2026-04-16 06:29:20.861866 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-04-16 06:29:20.861884 | orchestrator | Thursday 16 April 2026 06:29:18 +0000 (0:00:01.167) 0:01:55.301 ********
2026-04-16 06:29:20.861900 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:20.861919 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:20.861937 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:20.861955 | orchestrator |
2026-04-16 06:29:20.861990 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-04-16 06:29:46.213584 | orchestrator | Thursday 16 April 2026 06:29:20 +0000 (0:00:02.196) 0:01:57.498 ********
2026-04-16 06:29:46.213725 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:29:46.213810 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:29:46.213830 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:29:46.213850 | orchestrator |
2026-04-16 06:29:46.213870 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-04-16 06:29:46.213889 | orchestrator | Thursday 16 April 2026 06:29:22 +0000 (0:00:01.464) 0:01:58.962 ********
2026-04-16 06:29:46.213936 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.213957 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:29:46.213974 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:29:46.213990 | orchestrator |
2026-04-16 06:29:46.214008 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-04-16 06:29:46.214102 | orchestrator | Thursday 16 April 2026 06:29:22 +0000 (0:00:00.614) 0:01:59.577 ********
2026-04-16 06:29:46.214124 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:29:46.214144 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.214165 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:29:46.214184 | orchestrator |
2026-04-16 06:29:46.214204 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:29:46.214224 | orchestrator | Thursday 16 April 2026 06:29:25 +0000 (0:00:02.891) 0:02:02.469 ********
2026-04-16 06:29:46.214245 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:29:46.214265 | orchestrator |
2026-04-16 06:29:46.214284 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-04-16 06:29:46.214304 | orchestrator | Thursday 16 April 2026 06:29:26 +0000 (0:00:00.498) 0:02:02.968 ********
2026-04-16 06:29:46.214325 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.214347 | orchestrator |
2026-04-16 06:29:46.214368 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-04-16 06:29:46.214388 | orchestrator | Thursday 16 April 2026 06:29:30 +0000 (0:00:03.712) 0:02:06.680 ********
2026-04-16 06:29:46.214401 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.214420 | orchestrator |
2026-04-16 06:29:46.214439 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-04-16 06:29:46.214459 | orchestrator | Thursday 16 April 2026 06:29:33 +0000 (0:00:03.025) 0:02:09.705 ********
2026-04-16 06:29:46.214479 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-04-16 06:29:46.214496 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-04-16 06:29:46.214508 | orchestrator |
2026-04-16 06:29:46.214518 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-04-16 06:29:46.214529 | orchestrator | Thursday 16 April 2026 06:29:40 +0000 (0:00:07.465) 0:02:17.170 ********
2026-04-16 06:29:46.214539 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.214550 | orchestrator |
2026-04-16 06:29:46.214567 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-04-16 06:29:46.214584 | orchestrator | Thursday 16 April 2026 06:29:43 +0000 (0:00:03.363) 0:02:20.533 ********
2026-04-16 06:29:46.214601 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:29:46.214637 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:29:46.214655 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:29:46.214673 | orchestrator |
2026-04-16 06:29:46.214690 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-04-16 06:29:46.214709 | orchestrator | Thursday 16 April 2026 06:29:44 +0000 (0:00:00.447) 0:02:20.981 ********
2026-04-16 06:29:46.214770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:29:46.214811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:29:46.214836 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:46.214849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:46.214863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:46.214892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:46.214915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:46.214945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:46.214980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.606707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.606869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.606902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.606916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:47.606950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:47.606962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:47.606973 | orchestrator |
2026-04-16 06:29:47.606986 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-16 06:29:47.606999 | orchestrator | Thursday 16 April 2026 06:29:46 +0000 (0:00:02.288) 0:02:23.270 ********
2026-04-16 06:29:47.607011 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:29:47.607023 | orchestrator |
2026-04-16 06:29:47.607034 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-16 06:29:47.607045 | orchestrator | Thursday 16 April 2026 06:29:46 +0000 (0:00:00.122) 0:02:23.393 ********
2026-04-16 06:29:47.607056 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:29:47.607084 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:29:47.607096 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:29:47.607107 | orchestrator |
2026-04-16 06:29:47.607118 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-16 06:29:47.607129 | orchestrator | Thursday 16 April 2026 06:29:47 +0000 (0:00:00.280) 0:02:23.673 ********
2026-04-16 06:29:47.607141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:47.607160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:47.607174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.607194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:47.607206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:47.607217 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:29:47.607237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:52.298012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:52.298150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:52.298172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:52.298199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:52.298208 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:29:52.298218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:52.298227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:52.298249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:52.298257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:52.298268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:52.298282 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:29:52.298289 | orchestrator |
2026-04-16 06:29:52.298297 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:29:52.298306 | orchestrator | Thursday 16 April 2026 06:29:47 +0000 (0:00:00.672) 0:02:24.345 ********
2026-04-16 06:29:52.298314 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:29:52.298321 | orchestrator |
2026-04-16 06:29:52.298329 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-16 06:29:52.298336 | orchestrator | Thursday 16 April 2026 06:29:48 +0000 (0:00:00.687) 0:02:25.033 ********
2026-04-16 06:29:52.298344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:52.298353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:52.298386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:53.752043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.752166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.752190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.752210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.752377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:53.752394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:53.752411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:53.752428 | orchestrator |
2026-04-16 06:29:53.752446 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-16 06:29:53.752465 | orchestrator | Thursday 16 April 2026 06:29:53 +0000 (0:00:04.830) 0:02:29.863 ********
2026-04-16 06:29:53.752494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:53.846558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.846655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.846670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.846682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:53.846694 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:29:53.846708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:53.846769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.846814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.846827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.846838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:29:53.846850 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:29:53.846861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 06:29:53.846873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 06:29:53.846893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 06:29:53.846917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping
3306'], 'timeout': '30'}}})  2026-04-16 06:29:54.581811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:29:54.581917 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:29:54.581934 | orchestrator | 2026-04-16 06:29:54.581947 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-16 06:29:54.581960 | orchestrator | Thursday 16 April 2026 06:29:53 +0000 (0:00:00.633) 0:02:30.497 ******** 2026-04-16 06:29:54.581973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-04-16 06:29:54.581987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 06:29:54.581999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 06:29:54.582086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 06:29:54.582135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:29:54.582148 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:29:54.582160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 06:29:54.582172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 06:29:54.582184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 06:29:54.582195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 06:29:54.582214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:29:54.582226 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:29:54.582251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 06:29:59.016873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 06:29:59.016964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 06:29:59.016977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 06:29:59.017004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 06:29:59.017012 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:29:59.017020 | orchestrator | 2026-04-16 06:29:59.017028 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-16 
06:29:59.017035 | orchestrator | Thursday 16 April 2026 06:29:55 +0000 (0:00:01.192) 0:02:31.689 ******** 2026-04-16 06:29:59.017043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:29:59.017073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:29:59.017081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:29:59.017088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:29:59.017100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:29:59.017107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:29:59.017114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:29:59.017127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-04-16 06:30:13.803351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:30:13.803364 | orchestrator | 2026-04-16 06:30:13.803377 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-16 06:30:13.803390 | orchestrator | Thursday 16 April 2026 06:30:00 +0000 (0:00:04.976) 0:02:36.665 ******** 2026-04-16 06:30:13.803401 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 06:30:13.803413 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 06:30:13.803424 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 06:30:13.803434 | orchestrator | 2026-04-16 06:30:13.803445 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-16 06:30:13.803456 | orchestrator | Thursday 16 April 2026 06:30:01 +0000 (0:00:01.506) 0:02:38.172 ******** 2026-04-16 06:30:13.803477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:13.803490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:13.803507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:13.803526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:30:28.396801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:30:28.396917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:30:28.396959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.396973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.396985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:30:28.397098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 06:30:28.397109 | orchestrator |
2026-04-16 06:30:28.397122 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2026-04-16 06:30:28.397136 | orchestrator | Thursday 16 April 2026 06:30:16 +0000 (0:00:15.362) 0:02:53.535 ********
2026-04-16 06:30:28.397147 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:30:28.397159 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:30:28.397170 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:30:28.397181 | orchestrator |
2026-04-16 06:30:28.397192 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-04-16 06:30:28.397203 | orchestrator | Thursday 16 April 2026 06:30:18 +0000 (0:00:01.730) 0:02:55.266 ********
2026-04-16 06:30:28.397214 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-16 06:30:28.397225 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-16 06:30:28.397235 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-16 06:30:28.397246 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-16 06:30:28.397257 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-16 06:30:28.397268 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-16 06:30:28.397279 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-16 06:30:28.397298 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-16 06:30:28.397311 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-16 06:30:28.397323 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-16 06:30:28.397335 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-16 06:30:28.397348 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-16 06:30:28.397367 | orchestrator |
2026-04-16 06:30:28.397379 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-04-16 06:30:28.397392 | orchestrator | Thursday 16 April 2026 06:30:23 +0000 (0:00:04.792) 0:03:00.058 ********
2026-04-16 06:30:28.397404 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-16 06:30:28.397417 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-16 06:30:28.397436 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-16 06:30:36.541856 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.541938 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.541946 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.541952 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.541957 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.541963 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.541968 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-16 06:30:36.541973 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-16 06:30:36.541979 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-16 06:30:36.541984 | orchestrator |
2026-04-16 06:30:36.541990 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-04-16 06:30:36.541996 | orchestrator | Thursday 16 April 2026 06:30:28 +0000 (0:00:04.981) 0:03:05.040 ********
2026-04-16 06:30:36.542002 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-04-16 06:30:36.542007 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-04-16 06:30:36.542048 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-04-16 06:30:36.542055 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.542061 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.542066 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-04-16 06:30:36.542072 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.542077 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.542083 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-04-16 06:30:36.542088 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-04-16 06:30:36.542094 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-04-16 06:30:36.542099 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-04-16 06:30:36.542105 | orchestrator |
2026-04-16 06:30:36.542111 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-04-16 06:30:36.542117 | orchestrator | Thursday 16 April 2026 06:30:33 +0000 (0:00:05.063) 0:03:10.103 ********
2026-04-16 06:30:36.542127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:36.542148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:36.542190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 06:30:36.542198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:30:36.542205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 06:30:36.542211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-04-16 06:30:36.542218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:36.542230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:36.542238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 06:30:36.542248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 06:31:53.090634 | orchestrator | 2026-04-16 
06:31:53.090647 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 06:31:53.090659 | orchestrator | Thursday 16 April 2026 06:30:37 +0000 (0:00:03.880) 0:03:13.984 ********
2026-04-16 06:31:53.090671 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:31:53.090682 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:31:53.090693 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:31:53.090704 | orchestrator |
2026-04-16 06:31:53.090715 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-04-16 06:31:53.090726 | orchestrator | Thursday 16 April 2026 06:30:37 +0000 (0:00:00.291) 0:03:14.276 ********
2026-04-16 06:31:53.090766 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.090777 | orchestrator |
2026-04-16 06:31:53.090788 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-04-16 06:31:53.090799 | orchestrator | Thursday 16 April 2026 06:30:39 +0000 (0:00:02.029) 0:03:16.305 ********
2026-04-16 06:31:53.090816 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.090835 | orchestrator |
2026-04-16 06:31:53.090854 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-04-16 06:31:53.090874 | orchestrator | Thursday 16 April 2026 06:30:41 +0000 (0:00:02.015) 0:03:18.320 ********
2026-04-16 06:31:53.090895 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.090911 | orchestrator |
2026-04-16 06:31:53.090923 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-04-16 06:31:53.090934 | orchestrator | Thursday 16 April 2026 06:30:43 +0000 (0:00:02.159) 0:03:20.479 ********
2026-04-16 06:31:53.090961 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.090975 | orchestrator |
2026-04-16 06:31:53.090989 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-16 06:31:53.091002 | orchestrator | Thursday 16 April 2026 06:30:46 +0000 (0:00:02.176) 0:03:22.656 ********
2026-04-16 06:31:53.091015 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091029 | orchestrator |
2026-04-16 06:31:53.091042 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 06:31:53.091054 | orchestrator | Thursday 16 April 2026 06:31:08 +0000 (0:00:22.020) 0:03:44.676 ********
2026-04-16 06:31:53.091067 | orchestrator |
2026-04-16 06:31:53.091080 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 06:31:53.091093 | orchestrator | Thursday 16 April 2026 06:31:08 +0000 (0:00:00.065) 0:03:44.741 ********
2026-04-16 06:31:53.091106 | orchestrator |
2026-04-16 06:31:53.091119 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 06:31:53.091131 | orchestrator | Thursday 16 April 2026 06:31:08 +0000 (0:00:00.065) 0:03:44.807 ********
2026-04-16 06:31:53.091142 | orchestrator |
2026-04-16 06:31:53.091152 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-04-16 06:31:53.091175 | orchestrator | Thursday 16 April 2026 06:31:08 +0000 (0:00:00.064) 0:03:44.871 ********
2026-04-16 06:31:53.091186 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091196 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:31:53.091207 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:31:53.091218 | orchestrator |
2026-04-16 06:31:53.091228 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-04-16 06:31:53.091239 | orchestrator | Thursday 16 April 2026 06:31:23 +0000 (0:00:15.610) 0:04:00.482 ********
2026-04-16 06:31:53.091250 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091261 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:31:53.091271 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:31:53.091282 | orchestrator |
2026-04-16 06:31:53.091293 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-04-16 06:31:53.091304 | orchestrator | Thursday 16 April 2026 06:31:29 +0000 (0:00:05.978) 0:04:06.460 ********
2026-04-16 06:31:53.091314 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091325 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:31:53.091336 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:31:53.091346 | orchestrator |
2026-04-16 06:31:53.091357 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-04-16 06:31:53.091368 | orchestrator | Thursday 16 April 2026 06:31:35 +0000 (0:00:05.229) 0:04:11.690 ********
2026-04-16 06:31:53.091378 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:31:53.091389 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:31:53.091400 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091411 | orchestrator |
2026-04-16 06:31:53.091421 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-04-16 06:31:53.091432 | orchestrator | Thursday 16 April 2026 06:31:43 +0000 (0:00:08.192) 0:04:19.882 ********
2026-04-16 06:31:53.091443 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:31:53.091454 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:31:53.091464 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:31:53.091475 | orchestrator |
2026-04-16 06:31:53.091486 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:31:53.091497 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 06:31:53.091509 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:31:53.091520 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:31:53.091531 | orchestrator |
2026-04-16 06:31:53.091541 | orchestrator |
2026-04-16 06:31:53.091552 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:31:53.091563 | orchestrator | Thursday 16 April 2026 06:31:53 +0000 (0:00:09.834) 0:04:29.717 ********
2026-04-16 06:31:53.091574 | orchestrator | ===============================================================================
2026-04-16 06:31:53.091591 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.02s
2026-04-16 06:31:53.091602 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.61s
2026-04-16 06:31:53.091612 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.53s
2026-04-16 06:31:53.091623 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 15.36s
2026-04-16 06:31:53.091634 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.15s
2026-04-16 06:31:53.091644 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.83s
2026-04-16 06:31:53.091655 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.21s
2026-04-16 06:31:53.091665 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.29s
2026-04-16 06:31:53.091682 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.19s
2026-04-16 06:31:53.091693 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.47s
2026-04-16 06:31:53.091703 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.24s
2026-04-16 06:31:53.091714 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.36s
2026-04-16 06:31:53.091725 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 5.98s
2026-04-16 06:31:53.091777 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.53s
2026-04-16 06:31:53.091796 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.23s
2026-04-16 06:31:53.385433 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.06s
2026-04-16 06:31:53.385522 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.98s
2026-04-16 06:31:53.385536 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.98s
2026-04-16 06:31:53.385548 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 4.90s
2026-04-16 06:31:53.385559 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 4.86s
2026-04-16 06:31:55.653789 | orchestrator | 2026-04-16 06:31:55 | INFO  | Task 84c01769-b30a-4a93-85a9-6f2bc2e1e531 (ceilometer) was prepared for execution.
2026-04-16 06:31:55.653842 | orchestrator | 2026-04-16 06:31:55 | INFO  | It takes a moment until task 84c01769-b30a-4a93-85a9-6f2bc2e1e531 (ceilometer) has been started and output is visible here.
2026-04-16 06:32:18.227949 | orchestrator |
2026-04-16 06:32:18.228051 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:32:18.228069 | orchestrator |
2026-04-16 06:32:18.228083 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:32:18.228097 | orchestrator | Thursday 16 April 2026 06:31:59 +0000 (0:00:00.258) 0:00:00.258 ********
2026-04-16 06:32:18.228110 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:32:18.228125 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:32:18.228138 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:32:18.228151 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:32:18.228160 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:32:18.228168 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:32:18.228176 | orchestrator |
2026-04-16 06:32:18.228184 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:32:18.228192 | orchestrator | Thursday 16 April 2026 06:32:00 +0000 (0:00:00.624) 0:00:00.883 ********
2026-04-16 06:32:18.228201 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228210 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228218 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228226 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228234 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228242 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-04-16 06:32:18.228250 | orchestrator |
2026-04-16 06:32:18.228257 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-04-16 06:32:18.228265 | orchestrator |
2026-04-16 06:32:18.228284 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-16 06:32:18.228292 | orchestrator | Thursday 16 April 2026 06:32:00 +0000 (0:00:00.522) 0:00:01.405 ********
2026-04-16 06:32:18.228302 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 06:32:18.228311 | orchestrator |
2026-04-16 06:32:18.228320 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ********************
2026-04-16 06:32:18.228328 | orchestrator | Thursday 16 April 2026 06:32:01 +0000 (0:00:01.005) 0:00:02.411 ********
2026-04-16 06:32:18.228356 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:18.228364 | orchestrator |
2026-04-16 06:32:18.228372 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] *******************
2026-04-16 06:32:18.228380 | orchestrator | Thursday 16 April 2026 06:32:01 +0000 (0:00:00.103) 0:00:02.514 ********
2026-04-16 06:32:18.228388 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:18.228396 | orchestrator |
2026-04-16 06:32:18.228404 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ********************
2026-04-16 06:32:18.228412 | orchestrator | Thursday 16 April 2026 06:32:02 +0000 (0:00:00.105) 0:00:02.620 ********
2026-04-16 06:32:18.228420 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:32:18.228427 | orchestrator |
2026-04-16 06:32:18.228435 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] ***********************
2026-04-16 06:32:18.228443 | orchestrator | Thursday 16 April 2026 06:32:05 +0000 (0:00:03.579) 0:00:06.199 ********
2026-04-16 06:32:18.228464 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:32:18.228472 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service)
2026-04-16 06:32:18.228480 | orchestrator |
2026-04-16 06:32:18.228488 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] ***********************
2026-04-16 06:32:18.228496 | orchestrator | Thursday 16 April 2026 06:32:09 +0000 (0:00:03.811) 0:00:10.011 ********
2026-04-16 06:32:18.228503 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 06:32:18.228511 | orchestrator |
2026-04-16 06:32:18.228519 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ******************
2026-04-16 06:32:18.228527 | orchestrator | Thursday 16 April 2026 06:32:12 +0000 (0:00:03.222) 0:00:13.234 ********
2026-04-16 06:32:18.228535 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin)
2026-04-16 06:32:18.228543 | orchestrator |
2026-04-16 06:32:18.228550 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] *******
2026-04-16 06:32:18.228558 | orchestrator | Thursday 16 April 2026 06:32:16 +0000 (0:00:03.908) 0:00:17.142 ********
2026-04-16 06:32:18.228566 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:18.228575 | orchestrator |
2026-04-16 06:32:18.228583 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-04-16 06:32:18.228591 | orchestrator | Thursday 16 April 2026 06:32:16 +0000 (0:00:00.125) 0:00:17.267 ********
2026-04-16 06:32:18.228602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:18.228628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:18.228638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:18.228654 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:18.228668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:18.228676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:18.228685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:18.228698 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:22.729090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:22.729219 | orchestrator | 2026-04-16 06:32:22.729236 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-04-16 06:32:22.729250 | orchestrator | Thursday 16 April 2026 06:32:18 +0000 (0:00:01.471) 0:00:18.739 ******** 2026-04-16 06:32:22.729261 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-16 06:32:22.729273 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 06:32:22.729284 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 06:32:22.729294 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:32:22.729305 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:32:22.729315 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:32:22.729326 | orchestrator | 2026-04-16 06:32:22.729337 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-04-16 06:32:22.729349 | orchestrator | Thursday 16 April 2026 06:32:19 +0000 (0:00:01.602) 0:00:20.341 ******** 2026-04-16 06:32:22.729360 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:32:22.729371 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:32:22.729382 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:32:22.729392 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:32:22.729403 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:32:22.729413 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:32:22.729424 | orchestrator | 2026-04-16 06:32:22.729434 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-04-16 06:32:22.729446 | orchestrator | Thursday 16 April 2026 06:32:20 +0000 (0:00:00.597) 0:00:20.939 ******** 2026-04-16 06:32:22.729457 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:22.729468 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:22.729478 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:22.729489 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:22.729499 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:22.729510 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:22.729521 | orchestrator | 2026-04-16 06:32:22.729531 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-04-16 06:32:22.729543 | orchestrator | Thursday 16 April 2026 06:32:21 +0000 (0:00:00.740) 0:00:21.679 ******** 2026-04-16 06:32:22.729554 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:32:22.729564 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:32:22.729575 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:32:22.729585 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:32:22.729596 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:32:22.729607 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:32:22.729617 | orchestrator | 2026-04-16 06:32:22.729665 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-04-16 06:32:22.729677 | orchestrator | Thursday 16 April 2026 06:32:21 +0000 (0:00:00.566) 0:00:22.246 ******** 2026-04-16 06:32:22.729689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:22.729702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:22.729748 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:22.729780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:22.729793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:22.729804 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:22.729816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:22.729833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:22.729845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:22.729857 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:22.729875 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:22.729887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:22.729898 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:22.729916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.088443 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:27.088574 | orchestrator | 2026-04-16 06:32:27.088598 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-04-16 06:32:27.088618 | orchestrator | Thursday 16 April 2026 06:32:22 +0000 (0:00:00.995) 0:00:23.241 ******** 2026-04-16 06:32:27.088640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.088660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:27.088678 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:27.088715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.088825 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:27.088871 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:27.088891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.088910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-04-16 06:32:27.088929 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:27.088971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.088991 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:27.089010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.089028 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:27.089055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:27.089083 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:27.089102 | orchestrator | 2026-04-16 06:32:27.089122 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-04-16 06:32:27.089142 | orchestrator | Thursday 16 April 2026 06:32:23 +0000 (0:00:00.834) 0:00:24.076 ******** 2026-04-16 06:32:27.089161 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 06:32:27.089178 | orchestrator | 2026-04-16 06:32:27.089195 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-04-16 06:32:27.089214 | orchestrator | Thursday 16 April 2026 06:32:24 +0000 (0:00:00.671) 0:00:24.747 ******** 2026-04-16 06:32:27.089232 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:32:27.089250 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:32:27.089267 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:32:27.089284 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:32:27.089300 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:32:27.089316 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:32:27.089333 | orchestrator | 2026-04-16 06:32:27.089351 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-04-16 06:32:27.089368 | orchestrator | Thursday 16 April 2026 06:32:24 +0000 (0:00:00.745) 
0:00:25.493 ******** 2026-04-16 06:32:27.089385 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:32:27.089401 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:32:27.089418 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:32:27.089435 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:32:27.089452 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:32:27.089469 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:32:27.089486 | orchestrator | 2026-04-16 06:32:27.089502 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-04-16 06:32:27.089519 | orchestrator | Thursday 16 April 2026 06:32:25 +0000 (0:00:00.871) 0:00:26.364 ******** 2026-04-16 06:32:27.089537 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:27.089555 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:27.089571 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:27.089588 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:27.089604 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:27.089622 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:27.089638 | orchestrator | 2026-04-16 06:32:27.089657 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-04-16 06:32:27.089674 | orchestrator | Thursday 16 April 2026 06:32:26 +0000 (0:00:00.693) 0:00:27.058 ******** 2026-04-16 06:32:27.089690 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:27.089707 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:27.089745 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:27.089764 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:27.089779 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:27.089795 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:27.089812 | orchestrator | 2026-04-16 06:32:31.774424 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-04-16 06:32:31.774522 | orchestrator | Thursday 16 April 2026 06:32:27 +0000 (0:00:00.548) 0:00:27.607 ******** 2026-04-16 06:32:31.774538 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 06:32:31.774551 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 06:32:31.774563 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 06:32:31.774575 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:32:31.774586 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:32:31.774598 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:32:31.774609 | orchestrator | 2026-04-16 06:32:31.774646 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-04-16 06:32:31.774687 | orchestrator | Thursday 16 April 2026 06:32:28 +0000 (0:00:01.383) 0:00:28.990 ******** 2026-04-16 06:32:31.774702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:31.774801 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:31.774813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:31.774837 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:31.774848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:31.774901 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:31.774914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774926 | orchestrator | skipping: [testbed-node-3] 
2026-04-16 06:32:31.774943 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774957 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:31.774972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:31.774985 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:31.774997 | orchestrator | 2026-04-16 06:32:31.775010 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-04-16 06:32:31.775023 | orchestrator | Thursday 16 April 2026 06:32:29 +0000 (0:00:00.777) 0:00:29.767 ******** 2026-04-16 06:32:31.775036 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 06:32:31.775049 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:31.775062 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:31.775074 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:31.775087 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:31.775099 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:31.775112 | orchestrator | 2026-04-16 06:32:31.775125 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-04-16 06:32:31.775138 | orchestrator | Thursday 16 April 2026 06:32:29 +0000 (0:00:00.752) 0:00:30.520 ******** 2026-04-16 06:32:31.775151 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 06:32:31.775163 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 06:32:31.775175 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 06:32:31.775188 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:32:31.775200 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:32:31.775213 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:32:31.775232 | orchestrator | 2026-04-16 06:32:31.775245 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-04-16 06:32:31.775257 | orchestrator | Thursday 16 April 2026 06:32:31 +0000 (0:00:01.360) 0:00:31.880 ******** 2026-04-16 06:32:31.775279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:37.310447 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:37.310468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:37.310520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:37.310568 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:37.310582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310594 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:37.310606 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:37.310636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310649 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:37.310666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.310678 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:37.310690 | orchestrator | 2026-04-16 06:32:37.310701 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-16 06:32:37.310714 | orchestrator | Thursday 16 April 2026 06:32:32 +0000 (0:00:01.018) 0:00:32.899 ******** 2026-04-16 06:32:37.310761 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:37.310773 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:37.310784 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:37.310794 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:37.310805 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:37.310815 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:37.310826 | orchestrator | 2026-04-16 06:32:37.310839 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-16 06:32:37.310852 | orchestrator | Thursday 16 April 2026 06:32:33 +0000 (0:00:00.717) 0:00:33.616 ******** 2026-04-16 06:32:37.310864 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:37.310878 | orchestrator | 2026-04-16 06:32:37.310891 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-16 06:32:37.310904 | orchestrator | Thursday 16 April 2026 06:32:33 +0000 (0:00:00.144) 0:00:33.761 ******** 2026-04-16 06:32:37.310916 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:37.310928 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:37.310949 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:37.310962 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:37.310974 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:37.310985 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:37.310997 | orchestrator | 2026-04-16 
06:32:37.311010 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-16 06:32:37.311023 | orchestrator | Thursday 16 April 2026 06:32:33 +0000 (0:00:00.584) 0:00:34.346 ******** 2026-04-16 06:32:37.311037 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:32:37.311051 | orchestrator | 2026-04-16 06:32:37.311063 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-16 06:32:37.311076 | orchestrator | Thursday 16 April 2026 06:32:35 +0000 (0:00:01.233) 0:00:35.580 ******** 2026-04-16 06:32:37.311089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.311113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.814874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.814987 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.815006 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.815040 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:32:37.815053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:37.815065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:37.815095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:32:37.815108 | orchestrator | 2026-04-16 06:32:37.815121 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-16 06:32:37.815134 | orchestrator | Thursday 16 April 2026 06:32:37 +0000 (0:00:02.243) 0:00:37.824 ******** 2026-04-16 06:32:37.815152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.815164 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:37.815185 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:37.815198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.815209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  
2026-04-16 06:32:37.815221 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:37.815232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:37.815251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:39.659775 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:39.659859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.659904 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:39.659913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.659921 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:39.659930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.659937 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
06:32:39.659945 | orchestrator | 2026-04-16 06:32:39.659953 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-16 06:32:39.659962 | orchestrator | Thursday 16 April 2026 06:32:38 +0000 (0:00:00.837) 0:00:38.661 ******** 2026-04-16 06:32:39.659970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.659980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:39.660003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.660016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:39.660032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.660039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 06:32:39.660047 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:32:39.660055 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:32:39.660062 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:32:39.660070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:39.660078 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:32:39.660085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:39.660093 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:39.660107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748467 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:46.748562 | orchestrator |
2026-04-16 06:32:46.748573 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-04-16 06:32:46.748583 | orchestrator | Thursday 16 April 2026 06:32:39 +0000 (0:00:01.511) 0:00:40.173 ********
2026-04-16 06:32:46.748604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748643 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748688 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:46.748711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:46.748765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:46.748773 | orchestrator |
2026-04-16 06:32:46.748781 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-04-16 06:32:46.748788 | orchestrator | Thursday 16 April 2026 06:32:42 +0000 (0:00:02.418) 0:00:42.591 ********
2026-04-16 06:32:46.748796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:46.748824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.820257 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.820393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.820411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.820423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:55.820434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:55.820467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:55.820478 | orchestrator |
2026-04-16 06:32:55.820490 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-04-16 06:32:55.820519 | orchestrator | Thursday 16 April 2026 06:32:46 +0000 (0:00:04.673) 0:00:47.265 ********
2026-04-16 06:32:55.820529 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:32:55.820539 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 06:32:55.820549 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 06:32:55.820558 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 06:32:55.820575 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 06:32:55.820585 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 06:32:55.820595 | orchestrator |
2026-04-16 06:32:55.820604 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-04-16 06:32:55.820614 | orchestrator | Thursday 16 April 2026 06:32:48 +0000 (0:00:01.520) 0:00:48.786 ********
2026-04-16 06:32:55.820623 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:55.820633 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:32:55.820642 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:32:55.820652 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:55.820661 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:55.820670 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:55.820680 | orchestrator |
2026-04-16 06:32:55.820689 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-04-16 06:32:55.820699 | orchestrator | Thursday 16 April 2026 06:32:48 +0000 (0:00:00.584) 0:00:49.370 ********
2026-04-16 06:32:55.820709 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:55.820741 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:55.820751 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:55.820762 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:32:55.820773 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:32:55.820798 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:32:55.820810 | orchestrator |
2026-04-16 06:32:55.820821 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-04-16 06:32:55.820844 | orchestrator | Thursday 16 April 2026 06:32:50 +0000 (0:00:01.607) 0:00:50.978 ********
2026-04-16 06:32:55.820855 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:55.820866 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:55.820877 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:55.820888 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:32:55.820899 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:32:55.820910 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:32:55.820921 | orchestrator |
2026-04-16 06:32:55.820932 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-04-16 06:32:55.820943 | orchestrator | Thursday 16 April 2026 06:32:51 +0000 (0:00:01.398) 0:00:52.376 ********
2026-04-16 06:32:55.820954 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:32:55.820973 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 06:32:55.820985 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 06:32:55.820994 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 06:32:55.821004 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 06:32:55.821014 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 06:32:55.821023 | orchestrator |
2026-04-16 06:32:55.821033 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-04-16 06:32:55.821043 | orchestrator | Thursday 16 April 2026 06:32:53 +0000 (0:00:01.443) 0:00:53.820 ********
2026-04-16 06:32:55.821053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.821067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.821077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:55.821100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.640938 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641077 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641138 | orchestrator |
2026-04-16 06:32:56.641149 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-04-16 06:32:56.641161 | orchestrator | Thursday 16 April 2026 06:32:55 +0000 (0:00:02.511) 0:00:56.332 ********
2026-04-16 06:32:56.641185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641233 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:56.641244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641265 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:32:56.641275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:56.641300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:56.641310 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:32:56.641321 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:56.641346 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.923497 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:59.923638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.923658 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:59.923671 | orchestrator |
2026-04-16 06:32:59.923683 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-04-16 06:32:59.923696 | orchestrator | Thursday 16 April 2026 06:32:56 +0000 (0:00:00.828) 0:00:57.160 ********
2026-04-16 06:32:59.923707 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:59.923784 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:32:59.923797 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:32:59.923808 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:59.923818 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:32:59.923830 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:32:59.923841 | orchestrator |
2026-04-16 06:32:59.923853 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-04-16 06:32:59.923864 | orchestrator | Thursday 16 April 2026 06:32:57 +0000 (0:00:00.756) 0:00:57.916 ********
2026-04-16 06:32:59.923877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.923893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:59.923906 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:32:59.923939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.923980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:59.923992 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:32:59.924029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.924043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 06:32:59.924057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-04-16 06:32:59.924069 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:32:59.924082 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:32:59.924095 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:59.924108 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:32:59.924127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-04-16 06:32:59.924148 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:32:59.924161 | orchestrator | 2026-04-16 06:32:59.924174 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-04-16 06:32:59.924186 | orchestrator | Thursday 16 April 2026 06:32:58 +0000 (0:00:00.859) 0:00:58.776 ******** 2026-04-16 06:32:59.924208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192869 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192933 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-04-16 06:33:37.192953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:33:37.192992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:33:37.193009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 06:33:37.193026 | orchestrator | 2026-04-16 06:33:37.193045 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-16 06:33:37.193064 | orchestrator | Thursday 16 April 2026 06:32:59 +0000 (0:00:01.662) 0:01:00.438 ******** 2026-04-16 
06:33:37.193080 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:33:37.193099 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:33:37.193116 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:33:37.193157 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:33:37.193190 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:33:37.193206 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:33:37.193224 | orchestrator | 2026-04-16 06:33:37.193243 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-16 06:33:37.193263 | orchestrator | Thursday 16 April 2026 06:33:00 +0000 (0:00:00.584) 0:01:01.023 ******** 2026-04-16 06:33:37.193281 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:33:37.193311 | orchestrator | 2026-04-16 06:33:37.193328 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193340 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:04.891) 0:01:05.914 ******** 2026-04-16 06:33:37.193351 | orchestrator | 2026-04-16 06:33:37.193362 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193374 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:00.072) 0:01:05.986 ******** 2026-04-16 06:33:37.193385 | orchestrator | 2026-04-16 06:33:37.193396 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193408 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:00.070) 0:01:06.057 ******** 2026-04-16 06:33:37.193419 | orchestrator | 2026-04-16 06:33:37.193430 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193441 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:00.270) 0:01:06.327 ******** 2026-04-16 06:33:37.193453 | orchestrator | 2026-04-16 06:33:37.193464 | 
orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193475 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:00.072) 0:01:06.400 ******** 2026-04-16 06:33:37.193486 | orchestrator | 2026-04-16 06:33:37.193497 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 06:33:37.193508 | orchestrator | Thursday 16 April 2026 06:33:05 +0000 (0:00:00.067) 0:01:06.467 ******** 2026-04-16 06:33:37.193519 | orchestrator | 2026-04-16 06:33:37.193529 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-16 06:33:37.193545 | orchestrator | Thursday 16 April 2026 06:33:06 +0000 (0:00:00.076) 0:01:06.543 ******** 2026-04-16 06:33:37.193555 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:33:37.193565 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:33:37.193574 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:33:37.193584 | orchestrator | 2026-04-16 06:33:37.193593 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-16 06:33:37.193603 | orchestrator | Thursday 16 April 2026 06:33:16 +0000 (0:00:10.279) 0:01:16.823 ******** 2026-04-16 06:33:37.193612 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:33:37.193622 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:33:37.193631 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:33:37.193641 | orchestrator | 2026-04-16 06:33:37.193651 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-16 06:33:37.193661 | orchestrator | Thursday 16 April 2026 06:33:25 +0000 (0:00:09.525) 0:01:26.348 ******** 2026-04-16 06:33:37.193670 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:33:37.193680 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:33:37.193689 | orchestrator | changed: [testbed-node-5] 2026-04-16 
06:33:37.193699 | orchestrator | 2026-04-16 06:33:37.193752 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:33:37.193777 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-04-16 06:33:37.193789 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 06:33:37.193810 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 06:33:37.597481 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 06:33:37.597565 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 06:33:37.597574 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 06:33:37.597603 | orchestrator | 2026-04-16 06:33:37.597611 | orchestrator | 2026-04-16 06:33:37.597618 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:33:37.597627 | orchestrator | Thursday 16 April 2026 06:33:37 +0000 (0:00:11.356) 0:01:37.704 ******** 2026-04-16 06:33:37.597634 | orchestrator | =============================================================================== 2026-04-16 06:33:37.597641 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.36s 2026-04-16 06:33:37.597648 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.28s 2026-04-16 06:33:37.597654 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.53s 2026-04-16 06:33:37.597661 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.89s 2026-04-16 06:33:37.597668 | orchestrator | ceilometer : Copying over 
ceilometer.conf ------------------------------- 4.67s 2026-04-16 06:33:37.597674 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.91s 2026-04-16 06:33:37.597681 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.81s 2026-04-16 06:33:37.597688 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.58s 2026-04-16 06:33:37.597694 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.22s 2026-04-16 06:33:37.597701 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.51s 2026-04-16 06:33:37.597755 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.42s 2026-04-16 06:33:37.597764 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.24s 2026-04-16 06:33:37.597771 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.66s 2026-04-16 06:33:37.597778 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.61s 2026-04-16 06:33:37.597785 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.60s 2026-04-16 06:33:37.597792 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.52s 2026-04-16 06:33:37.597799 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.51s 2026-04-16 06:33:37.597805 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.47s 2026-04-16 06:33:37.597812 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.44s 2026-04-16 06:33:37.597819 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.40s 2026-04-16 06:33:39.861824 | orchestrator | 2026-04-16 06:33:39 | INFO  | Task 
75606a1b-3f9d-43af-abe4-7a0c32428f1e (aodh) was prepared for execution. 2026-04-16 06:33:39.861924 | orchestrator | 2026-04-16 06:33:39 | INFO  | It takes a moment until task 75606a1b-3f9d-43af-abe4-7a0c32428f1e (aodh) has been started and output is visible here. 2026-04-16 06:34:10.522646 | orchestrator | 2026-04-16 06:34:10.522813 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:34:10.522832 | orchestrator | 2026-04-16 06:34:10.522844 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:34:10.522872 | orchestrator | Thursday 16 April 2026 06:33:43 +0000 (0:00:00.251) 0:00:00.251 ******** 2026-04-16 06:34:10.522884 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:34:10.522896 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:34:10.522906 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:34:10.522917 | orchestrator | 2026-04-16 06:34:10.522928 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:34:10.522939 | orchestrator | Thursday 16 April 2026 06:33:44 +0000 (0:00:00.253) 0:00:00.504 ******** 2026-04-16 06:34:10.522950 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-16 06:34:10.522961 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-16 06:34:10.522972 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-16 06:34:10.522982 | orchestrator | 2026-04-16 06:34:10.523016 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-16 06:34:10.523028 | orchestrator | 2026-04-16 06:34:10.523039 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-16 06:34:10.523049 | orchestrator | Thursday 16 April 2026 06:33:44 +0000 (0:00:00.305) 0:00:00.810 ******** 2026-04-16 06:34:10.523060 | orchestrator | included: 
/ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:34:10.523071 | orchestrator | 2026-04-16 06:34:10.523083 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-04-16 06:34:10.523094 | orchestrator | Thursday 16 April 2026 06:33:44 +0000 (0:00:00.402) 0:00:01.213 ******** 2026-04-16 06:34:10.523105 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-04-16 06:34:10.523116 | orchestrator | 2026-04-16 06:34:10.523126 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-04-16 06:34:10.523137 | orchestrator | Thursday 16 April 2026 06:33:48 +0000 (0:00:03.250) 0:00:04.464 ******** 2026-04-16 06:34:10.523147 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-04-16 06:34:10.523158 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-04-16 06:34:10.523169 | orchestrator | 2026-04-16 06:34:10.523181 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-04-16 06:34:10.523193 | orchestrator | Thursday 16 April 2026 06:33:54 +0000 (0:00:06.301) 0:00:10.765 ******** 2026-04-16 06:34:10.523206 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:34:10.523219 | orchestrator | 2026-04-16 06:34:10.523231 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-04-16 06:34:10.523245 | orchestrator | Thursday 16 April 2026 06:33:57 +0000 (0:00:03.311) 0:00:14.077 ******** 2026-04-16 06:34:10.523257 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:34:10.523269 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-04-16 06:34:10.523282 | orchestrator | 2026-04-16 06:34:10.523300 | orchestrator | TASK [service-ks-register : 
aodh | Creating roles] ***************************** 2026-04-16 06:34:10.523318 | orchestrator | Thursday 16 April 2026 06:34:01 +0000 (0:00:03.854) 0:00:17.932 ******** 2026-04-16 06:34:10.523336 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-04-16 06:34:10.523355 | orchestrator | 2026-04-16 06:34:10.523374 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-04-16 06:34:10.523392 | orchestrator | Thursday 16 April 2026 06:34:04 +0000 (0:00:03.250) 0:00:21.182 ******** 2026-04-16 06:34:10.523410 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-04-16 06:34:10.523429 | orchestrator | 2026-04-16 06:34:10.523447 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-16 06:34:10.523466 | orchestrator | Thursday 16 April 2026 06:34:08 +0000 (0:00:03.749) 0:00:24.932 ******** 2026-04-16 06:34:10.523489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:10.523546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:10.523573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:10.523586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:10.523599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:10.523611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:10.523622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:10.523648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:11.820413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:11.820519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-04-16 06:34:11.820535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:11.820547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:11.820559 | orchestrator | 2026-04-16 06:34:11.820572 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-16 06:34:11.820585 | orchestrator | Thursday 16 April 2026 06:34:10 +0000 (0:00:01.969) 0:00:26.901 ******** 2026-04-16 06:34:11.820596 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:34:11.820608 | orchestrator | 2026-04-16 06:34:11.820619 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-16 06:34:11.820629 | orchestrator | Thursday 16 April 2026 06:34:10 +0000 (0:00:00.135) 0:00:27.036 ******** 2026-04-16 06:34:11.820640 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
06:34:11.820651 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:34:11.820662 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:34:11.820672 | orchestrator | 2026-04-16 06:34:11.820683 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-04-16 06:34:11.820694 | orchestrator | Thursday 16 April 2026 06:34:11 +0000 (0:00:00.546) 0:00:27.583 ******** 2026-04-16 06:34:11.820791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:11.820832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:11.820845 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:11.820857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:11.820868 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:34:11.820880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:11.820893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:11.820923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:11.820950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:16.652514 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:34:16.652631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:16.652652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:16.652665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:16.652677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:16.652785 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:34:16.652799 | orchestrator | 2026-04-16 06:34:16.652811 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-16 06:34:16.652824 | orchestrator | Thursday 16 April 2026 06:34:11 +0000 (0:00:00.621) 0:00:28.205 ******** 2026-04-16 06:34:16.652835 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:34:16.652847 | orchestrator | 2026-04-16 06:34:16.652859 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-16 06:34:16.652869 | orchestrator | Thursday 16 April 2026 06:34:12 +0000 (0:00:00.700) 0:00:28.905 ******** 2026-04-16 06:34:16.652896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:16.652927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:16.652940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:16.652952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:16.652973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:16.652985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:16.653002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:16.653021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:17.314818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:17.314916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:17.314931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:17.314970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:17.314983 | orchestrator | 2026-04-16 06:34:17.314997 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-16 06:34:17.315010 | orchestrator | Thursday 16 April 2026 06:34:16 +0000 (0:00:04.132) 0:00:33.038 ******** 2026-04-16 06:34:17.315024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:17.315076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:17.315108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:17.315120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:17.315132 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:34:17.315152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:17.315164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:17.315176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:17.315193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:17.315205 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:34:17.315224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:18.405347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:18.405484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405510 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:34:18.405522 | orchestrator | 2026-04-16 06:34:18.405533 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-16 06:34:18.405545 | orchestrator | Thursday 16 April 2026 06:34:17 +0000 (0:00:00.658) 0:00:33.697 ******** 2026-04-16 06:34:18.405556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:18.405580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:18.405591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405637 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:34:18.405648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:18.405658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:18.405668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:18.405694 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:34:18.405758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-04-16 06:34:22.411393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 06:34:22.411501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 06:34:22.411517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 06:34:22.411530 | orchestrator | skipping: [testbed-node-2] 
2026-04-16 06:34:22.411543 | orchestrator | 2026-04-16 06:34:22.411555 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-16 06:34:22.411568 | orchestrator | Thursday 16 April 2026 06:34:18 +0000 (0:00:01.093) 0:00:34.790 ******** 2026-04-16 06:34:22.411580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:22.411610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:22.411665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:22.411679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:22.411800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-04-16 06:34:30.525767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525778 | orchestrator | 2026-04-16 06:34:30.525786 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-16 06:34:30.525794 | orchestrator | Thursday 16 April 2026 06:34:22 +0000 (0:00:03.999) 0:00:38.790 ******** 2026-04-16 06:34:30.525802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:30.525821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:30.525843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:30.525863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:30.525909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 
'timeout': '30'}}}) 2026-04-16 06:34:30.525920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567200 | orchestrator | 2026-04-16 06:34:35.567217 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-16 06:34:35.567231 | orchestrator | Thursday 16 April 2026 06:34:30 +0000 (0:00:08.122) 0:00:46.913 ******** 2026-04-16 06:34:35.567242 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:34:35.567255 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:34:35.567266 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:34:35.567276 | orchestrator | 2026-04-16 06:34:35.567288 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-04-16 06:34:35.567299 | orchestrator | Thursday 16 April 2026 
06:34:32 +0000 (0:00:01.751) 0:00:48.665 ******** 2026-04-16 06:34:35.567312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:35.567363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:35.567376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 
'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-04-16 06:34:35.567406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}}) 2026-04-16 06:34:35.567431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:34:35.567510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:35:24.812429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 06:35:24.812547 | orchestrator | 2026-04-16 06:35:24.812565 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-16 06:35:24.812579 | orchestrator | Thursday 16 April 2026 06:34:35 +0000 (0:00:03.288) 0:00:51.953 ******** 2026-04-16 06:35:24.812590 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:35:24.812602 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:35:24.812613 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:35:24.812624 | orchestrator | 2026-04-16 06:35:24.812635 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-04-16 06:35:24.812672 | orchestrator | Thursday 16 April 2026 06:34:35 +0000 (0:00:00.304) 0:00:52.258 ******** 2026-04-16 06:35:24.812684 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.812695 | orchestrator | 2026-04-16 06:35:24.812757 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-04-16 06:35:24.812770 | orchestrator | Thursday 16 April 2026 06:34:37 +0000 (0:00:02.076) 0:00:54.335 ******** 2026-04-16 06:35:24.812781 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.812791 | orchestrator | 2026-04-16 06:35:24.812802 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-16 06:35:24.812813 | orchestrator | Thursday 16 April 2026 06:34:40 +0000 (0:00:02.343) 0:00:56.678 ******** 2026-04-16 06:35:24.812824 | orchestrator | changed: [testbed-node-0] 2026-04-16 
06:35:24.812835 | orchestrator | 2026-04-16 06:35:24.812846 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 06:35:24.812856 | orchestrator | Thursday 16 April 2026 06:34:53 +0000 (0:00:12.798) 0:01:09.476 ******** 2026-04-16 06:35:24.812867 | orchestrator | 2026-04-16 06:35:24.812877 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 06:35:24.812904 | orchestrator | Thursday 16 April 2026 06:34:53 +0000 (0:00:00.076) 0:01:09.553 ******** 2026-04-16 06:35:24.812924 | orchestrator | 2026-04-16 06:35:24.812942 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 06:35:24.812961 | orchestrator | Thursday 16 April 2026 06:34:53 +0000 (0:00:00.069) 0:01:09.622 ******** 2026-04-16 06:35:24.812981 | orchestrator | 2026-04-16 06:35:24.813001 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-16 06:35:24.813021 | orchestrator | Thursday 16 April 2026 06:34:53 +0000 (0:00:00.239) 0:01:09.862 ******** 2026-04-16 06:35:24.813040 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.813060 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:35:24.813074 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:35:24.813087 | orchestrator | 2026-04-16 06:35:24.813100 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-16 06:35:24.813112 | orchestrator | Thursday 16 April 2026 06:34:58 +0000 (0:00:05.421) 0:01:15.283 ******** 2026-04-16 06:35:24.813124 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.813137 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:35:24.813154 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:35:24.813173 | orchestrator | 2026-04-16 06:35:24.813192 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] 
*********************** 2026-04-16 06:35:24.813210 | orchestrator | Thursday 16 April 2026 06:35:03 +0000 (0:00:04.950) 0:01:20.233 ******** 2026-04-16 06:35:24.813229 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.813246 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:35:24.813265 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:35:24.813283 | orchestrator | 2026-04-16 06:35:24.813302 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-16 06:35:24.813321 | orchestrator | Thursday 16 April 2026 06:35:14 +0000 (0:00:10.340) 0:01:30.574 ******** 2026-04-16 06:35:24.813339 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:35:24.813358 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:35:24.813377 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:35:24.813396 | orchestrator | 2026-04-16 06:35:24.813414 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:35:24.813430 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 06:35:24.813443 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 06:35:24.813453 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 06:35:24.813478 | orchestrator | 2026-04-16 06:35:24.813489 | orchestrator | 2026-04-16 06:35:24.813499 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:35:24.813510 | orchestrator | Thursday 16 April 2026 06:35:24 +0000 (0:00:10.282) 0:01:40.856 ******** 2026-04-16 06:35:24.813521 | orchestrator | =============================================================================== 2026-04-16 06:35:24.813532 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.80s 
2026-04-16 06:35:24.813542 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 10.34s
2026-04-16 06:35:24.813578 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.28s
2026-04-16 06:35:24.813604 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.12s
2026-04-16 06:35:24.813626 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.30s
2026-04-16 06:35:24.813643 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.42s
2026-04-16 06:35:24.813661 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 4.95s
2026-04-16 06:35:24.813808 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.13s
2026-04-16 06:35:24.813836 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.00s
2026-04-16 06:35:24.813856 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.85s
2026-04-16 06:35:24.813873 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.75s
2026-04-16 06:35:24.813893 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.31s
2026-04-16 06:35:24.813911 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.29s
2026-04-16 06:35:24.813930 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.25s
2026-04-16 06:35:24.813947 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.25s
2026-04-16 06:35:24.813966 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.34s
2026-04-16 06:35:24.813985 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.08s
2026-04-16 06:35:24.814004 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.97s
2026-04-16 06:35:24.814089 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.75s
2026-04-16 06:35:24.814111 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.09s
2026-04-16 06:35:27.061135 | orchestrator | 2026-04-16 06:35:27 | INFO  | Task 43ee04c9-f6bb-45e6-93e4-b68eb13b7b02 (kolla-ceph-rgw) was prepared for execution.
2026-04-16 06:35:27.061225 | orchestrator | 2026-04-16 06:35:27 | INFO  | It takes a moment until task 43ee04c9-f6bb-45e6-93e4-b68eb13b7b02 (kolla-ceph-rgw) has been started and output is visible here.
2026-04-16 06:36:00.451199 | orchestrator | 
2026-04-16 06:36:00.451332 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:36:00.451349 | orchestrator | 
2026-04-16 06:36:00.451362 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:36:00.451374 | orchestrator | Thursday 16 April 2026 06:35:31 +0000 (0:00:00.265) 0:00:00.265 ********
2026-04-16 06:36:00.451385 | orchestrator | ok: [testbed-manager]
2026-04-16 06:36:00.451396 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:36:00.451407 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:36:00.451418 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:36:00.451429 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:36:00.451439 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:36:00.451450 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:36:00.451460 | orchestrator | 
2026-04-16 06:36:00.451471 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:36:00.451482 | orchestrator | Thursday 16 April 2026 06:35:31 +0000 (0:00:00.718) 0:00:00.983 ********
2026-04-16 06:36:00.451494 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451527 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451538 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451549 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451560 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451570 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451581 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-04-16 06:36:00.451591 | orchestrator | 
2026-04-16 06:36:00.451602 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-04-16 06:36:00.451613 | orchestrator | 
2026-04-16 06:36:00.451623 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-04-16 06:36:00.451634 | orchestrator | Thursday 16 April 2026 06:35:32 +0000 (0:00:00.613) 0:00:01.597 ********
2026-04-16 06:36:00.451647 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 06:36:00.451659 | orchestrator | 
2026-04-16 06:36:00.451670 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-04-16 06:36:00.451681 | orchestrator | Thursday 16 April 2026 06:35:33 +0000 (0:00:01.286) 0:00:02.883 ********
2026-04-16 06:36:00.451691 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-04-16 06:36:00.451774 | orchestrator | 
2026-04-16 06:36:00.451789 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-04-16 06:36:00.451802 | orchestrator | Thursday 16 April 2026 06:35:37 +0000 (0:00:03.354) 0:00:06.237 ********
2026-04-16 06:36:00.451815 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-04-16 06:36:00.451830 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-04-16 06:36:00.451843 | orchestrator | 
2026-04-16 06:36:00.451855 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-04-16 06:36:00.451868 | orchestrator | Thursday 16 April 2026 06:35:43 +0000 (0:00:05.943) 0:00:12.180 ********
2026-04-16 06:36:00.451882 | orchestrator | ok: [testbed-manager] => (item=service)
2026-04-16 06:36:00.451894 | orchestrator | 
2026-04-16 06:36:00.451907 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-04-16 06:36:00.451920 | orchestrator | Thursday 16 April 2026 06:35:46 +0000 (0:00:02.946) 0:00:15.127 ********
2026-04-16 06:36:00.451932 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:36:00.451945 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-04-16 06:36:00.451957 | orchestrator | 
2026-04-16 06:36:00.451970 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-04-16 06:36:00.451982 | orchestrator | Thursday 16 April 2026 06:35:49 +0000 (0:00:03.661) 0:00:18.789 ********
2026-04-16 06:36:00.451995 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-04-16 06:36:00.452008 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-04-16 06:36:00.452020 | orchestrator | 
2026-04-16 06:36:00.452033 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-04-16 06:36:00.452046 | orchestrator | Thursday 16 April 2026 06:35:55 +0000 (0:00:05.795) 0:00:24.584 ********
2026-04-16 06:36:00.452059 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-04-16 06:36:00.452072 | orchestrator | 
2026-04-16 06:36:00.452084 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:36:00.452097 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452111 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452131 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452143 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452154 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452182 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452200 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:00.452211 | orchestrator | 
2026-04-16 06:36:00.452222 | orchestrator | 
2026-04-16 06:36:00.452233 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:36:00.452244 | orchestrator | Thursday 16 April 2026 06:36:00 +0000 (0:00:04.466) 0:00:29.051 ********
2026-04-16 06:36:00.452254 | orchestrator | ===============================================================================
2026-04-16 06:36:00.452265 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.94s
2026-04-16 06:36:00.452276 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.80s
2026-04-16 06:36:00.452286 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.47s
2026-04-16 06:36:00.452297 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.66s
2026-04-16 06:36:00.452308 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.35s
2026-04-16 06:36:00.452318 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.95s
2026-04-16 06:36:00.452329 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.29s
2026-04-16 06:36:00.452340 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2026-04-16 06:36:00.452350 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-04-16 06:36:02.752677 | orchestrator | 2026-04-16 06:36:02 | INFO  | Task 8c39dd4f-33d1-414d-bff2-b3d9cc9e8878 (gnocchi) was prepared for execution.
2026-04-16 06:36:02.752848 | orchestrator | 2026-04-16 06:36:02 | INFO  | It takes a moment until task 8c39dd4f-33d1-414d-bff2-b3d9cc9e8878 (gnocchi) has been started and output is visible here.
2026-04-16 06:36:07.641570 | orchestrator | 
2026-04-16 06:36:07.641677 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:36:07.641694 | orchestrator | 
2026-04-16 06:36:07.641774 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:36:07.641787 | orchestrator | Thursday 16 April 2026 06:36:06 +0000 (0:00:00.246) 0:00:00.246 ********
2026-04-16 06:36:07.641798 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:36:07.641810 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:36:07.641821 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:36:07.641845 | orchestrator | 
2026-04-16 06:36:07.641867 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:36:07.641880 | orchestrator | Thursday 16 April 2026 06:36:07 +0000 (0:00:00.299) 0:00:00.546 ********
2026-04-16 06:36:07.641891 | orchestrator | ok: 
[testbed-node-0] => (item=enable_gnocchi_False)
2026-04-16 06:36:07.641902 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-04-16 06:36:07.641914 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-04-16 06:36:07.641925 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-04-16 06:36:07.641936 | orchestrator | 
2026-04-16 06:36:07.641947 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-04-16 06:36:07.641958 | orchestrator | skipping: no hosts matched
2026-04-16 06:36:07.641971 | orchestrator | 
2026-04-16 06:36:07.642008 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:36:07.642082 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:07.642097 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:07.642108 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:36:07.642121 | orchestrator | 
2026-04-16 06:36:07.642134 | orchestrator | 
2026-04-16 06:36:07.642147 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:36:07.642159 | orchestrator | Thursday 16 April 2026 06:36:07 +0000 (0:00:00.339) 0:00:00.885 ********
2026-04-16 06:36:07.642173 | orchestrator | ===============================================================================
2026-04-16 06:36:07.642193 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s
2026-04-16 06:36:07.642211 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2026-04-16 06:36:09.852198 | orchestrator | 2026-04-16 06:36:09 | INFO  | Task 4f3737e2-85c0-41ed-9aa5-9d6484223760 (manila) was 
prepared for execution.
2026-04-16 06:36:09.852337 | orchestrator | 2026-04-16 06:36:09 | INFO  | It takes a moment until task 4f3737e2-85c0-41ed-9aa5-9d6484223760 (manila) has been started and output is visible here.
2026-04-16 06:36:49.647067 | orchestrator | 
2026-04-16 06:36:49.647153 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:36:49.647163 | orchestrator | 
2026-04-16 06:36:49.647170 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:36:49.647177 | orchestrator | Thursday 16 April 2026 06:36:13 +0000 (0:00:00.185) 0:00:00.185 ********
2026-04-16 06:36:49.647183 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:36:49.647190 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:36:49.647196 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:36:49.647202 | orchestrator | 
2026-04-16 06:36:49.647208 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:36:49.647214 | orchestrator | Thursday 16 April 2026 06:36:13 +0000 (0:00:00.231) 0:00:00.417 ********
2026-04-16 06:36:49.647220 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-04-16 06:36:49.647238 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-04-16 06:36:49.647244 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-04-16 06:36:49.647250 | orchestrator | 
2026-04-16 06:36:49.647255 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-04-16 06:36:49.647261 | orchestrator | 
2026-04-16 06:36:49.647267 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 06:36:49.647273 | orchestrator | Thursday 16 April 2026 06:36:13 +0000 (0:00:00.284) 0:00:00.701 ********
2026-04-16 06:36:49.647278 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:36:49.647286 | orchestrator | 
2026-04-16 06:36:49.647291 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 06:36:49.647297 | orchestrator | Thursday 16 April 2026 06:36:14 +0000 (0:00:00.447) 0:00:01.149 ********
2026-04-16 06:36:49.647303 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:36:49.647310 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:36:49.647315 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:36:49.647321 | orchestrator | 
2026-04-16 06:36:49.647327 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-04-16 06:36:49.647333 | orchestrator | Thursday 16 April 2026 06:36:14 +0000 (0:00:00.346) 0:00:01.495 ********
2026-04-16 06:36:49.647338 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-04-16 06:36:49.647344 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-04-16 06:36:49.647367 | orchestrator | 
2026-04-16 06:36:49.647373 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-04-16 06:36:49.647379 | orchestrator | Thursday 16 April 2026 06:36:21 +0000 (0:00:06.270) 0:00:07.766 ********
2026-04-16 06:36:49.647385 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-04-16 06:36:49.647391 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-04-16 06:36:49.647397 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-04-16 06:36:49.647403 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-04-16 06:36:49.647409 | orchestrator | 
2026-04-16 06:36:49.647414 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-04-16 06:36:49.647420 | orchestrator | Thursday 16 April 2026 06:36:33 +0000 (0:00:12.697) 0:00:20.463 ********
2026-04-16 06:36:49.647426 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 06:36:49.647432 | orchestrator | 
2026-04-16 06:36:49.647438 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-04-16 06:36:49.647443 | orchestrator | Thursday 16 April 2026 06:36:36 +0000 (0:00:03.149) 0:00:23.613 ********
2026-04-16 06:36:49.647449 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 06:36:49.647455 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-04-16 06:36:49.647460 | orchestrator | 
2026-04-16 06:36:49.647466 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-04-16 06:36:49.647472 | orchestrator | Thursday 16 April 2026 06:36:40 +0000 (0:00:03.828) 0:00:27.442 ********
2026-04-16 06:36:49.647477 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 06:36:49.647484 | orchestrator | 
2026-04-16 06:36:49.647489 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-04-16 06:36:49.647495 | orchestrator | Thursday 16 April 2026 06:36:43 +0000 (0:00:03.084) 0:00:30.526 ********
2026-04-16 06:36:49.647501 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-04-16 06:36:49.647507 | orchestrator | 
2026-04-16 06:36:49.647513 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-04-16 06:36:49.647518 | orchestrator | Thursday 16 April 2026 06:36:47 +0000 (0:00:03.671) 0:00:34.198 ********
2026-04-16 06:36:49.647540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:36:49.647553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:36:49.647565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:36:49.647572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:49.647579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:49.647585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 
'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:49.647597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:59.573883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:59.574098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:59.574121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:59.574134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:36:59.574146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 
'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-16 06:36:59.574158 | orchestrator | 
2026-04-16 06:36:59.574171 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 06:36:59.574184 | orchestrator | Thursday 16 April 2026 06:36:49 +0000 (0:00:02.281) 0:00:36.480 ********
2026-04-16 06:36:59.574195 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:36:59.574206 | orchestrator | 
2026-04-16 06:36:59.574218 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-04-16 06:36:59.574229 | orchestrator | Thursday 16 April 2026 06:36:50 +0000 (0:00:00.564) 0:00:37.044 ********
2026-04-16 06:36:59.574240 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:36:59.574252 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:36:59.574262 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:36:59.574273 | orchestrator | 
2026-04-16 06:36:59.574284 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-04-16 06:36:59.574294 | orchestrator | Thursday 16 April 2026 06:36:51 +0000 (0:00:00.901) 0:00:37.946 ********
2026-04-16 06:36:59.574307 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574347 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574369 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574383 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574396 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574408 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574420 | orchestrator | 
2026-04-16 06:36:59.574433 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-04-16 06:36:59.574446 | orchestrator | Thursday 16 April 2026 06:36:52 +0000 (0:00:01.639) 0:00:39.585 ********
2026-04-16 06:36:59.574459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574472 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574484 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574496 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 06:36:59.574521 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']}) 
2026-04-16 06:36:59.574533 | orchestrator | 
2026-04-16 06:36:59.574546 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-04-16 06:36:59.574559 | orchestrator | Thursday 16 April 2026 06:36:54 +0000 (0:00:01.155) 0:00:40.740 ********
2026-04-16 06:36:59.574572 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-04-16 06:36:59.574584 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-04-16 06:36:59.574597 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-04-16 06:36:59.574610 | orchestrator | 
2026-04-16 06:36:59.574622 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-04-16 06:36:59.574634 | orchestrator | Thursday 16 April 2026 06:36:54 +0000 (0:00:00.128) 0:00:41.377 ********
2026-04-16 06:36:59.574647 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:36:59.574659 | orchestrator | 
2026-04-16 06:36:59.574671 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-04-16 06:36:59.574684 | orchestrator | Thursday 16 April 2026 06:36:54 +0000 (0:00:00.431) 0:00:41.506 ********
2026-04-16 06:36:59.574717 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:36:59.574729 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 06:36:59.574740 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:36:59.574751 | orchestrator | 2026-04-16 06:36:59.574762 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-04-16 06:36:59.574780 | orchestrator | Thursday 16 April 2026 06:36:55 +0000 (0:00:00.431) 0:00:41.937 ******** 2026-04-16 06:36:59.574791 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:36:59.574802 | orchestrator | 2026-04-16 06:36:59.574813 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-04-16 06:36:59.574828 | orchestrator | Thursday 16 April 2026 06:36:55 +0000 (0:00:00.563) 0:00:42.501 ******** 2026-04-16 06:36:59.574865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:00.403551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:00.403646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:00.403659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:00.403885 | orchestrator | 2026-04-16 06:37:00.403896 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-16 06:37:00.403907 | orchestrator | Thursday 16 April 2026 06:36:59 +0000 (0:00:03.897) 0:00:46.399 ******** 2026-04-16 06:37:00.403928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:01.003279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003429 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:37:01.003442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:01.003454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003516 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:37:01.003527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:01.003537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:01.003577 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:37:01.003587 | orchestrator | 2026-04-16 06:37:01.003597 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-16 06:37:01.003608 | orchestrator | Thursday 16 April 2026 06:37:00 +0000 (0:00:00.824) 0:00:47.223 ******** 2026-04-16 06:37:01.003639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:05.387475 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387620 | 
orchestrator | skipping: [testbed-node-0] 2026-04-16 06:37:05.387630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:05.387640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387711 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:37:05.387720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:05.387734 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:05.387759 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
06:37:05.387767 | orchestrator | 2026-04-16 06:37:05.387775 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-16 06:37:05.387784 | orchestrator | Thursday 16 April 2026 06:37:01 +0000 (0:00:00.810) 0:00:48.033 ******** 2026-04-16 06:37:05.387803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:11.808908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:11.809043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:11.809062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:11.809213 | orchestrator | 2026-04-16 06:37:11.809231 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-16 06:37:11.809244 | orchestrator | Thursday 16 April 2026 06:37:05 +0000 (0:00:04.337) 0:00:52.371 ******** 2026-04-16 06:37:11.809263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:15.791081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:15.791180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:15.791195 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:15.791237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:15.791326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:15.791399 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:15.791447 | orchestrator | 2026-04-16 06:37:15.791458 | orchestrator | TASK [manila : 
Copying over manila-share.conf] ********************************* 2026-04-16 06:37:15.791478 | orchestrator | Thursday 16 April 2026 06:37:11 +0000 (0:00:06.263) 0:00:58.634 ******** 2026-04-16 06:37:15.791489 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-16 06:37:15.791499 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-16 06:37:15.791509 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-16 06:37:15.791519 | orchestrator | 2026-04-16 06:37:15.791528 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-16 06:37:15.791538 | orchestrator | Thursday 16 April 2026 06:37:15 +0000 (0:00:03.378) 0:01:02.013 ******** 2026-04-16 06:37:15.791557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:18.928019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928131 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:37:18.928153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:18.928181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928209 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928217 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:37:18.928224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-04-16 06:37:18.928231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 06:37:18.928262 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:37:18.928269 | orchestrator | 2026-04-16 06:37:18.928277 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-04-16 06:37:18.928285 | orchestrator | Thursday 16 April 2026 06:37:15 +0000 (0:00:00.603) 0:01:02.616 ******** 2026-04-16 06:37:18.928298 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:59.328309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:59.328430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-04-16 06:37:59.328507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328550 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328630 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328684 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 06:37:59.328733 | orchestrator | 2026-04-16 06:37:59.328747 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-04-16 06:37:59.328759 | orchestrator | Thursday 16 April 2026 06:37:19 +0000 (0:00:03.141) 0:01:05.758 ******** 2026-04-16 06:37:59.328771 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:37:59.328783 | orchestrator | 2026-04-16 06:37:59.328794 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-04-16 06:37:59.328805 | orchestrator | Thursday 16 April 2026 06:37:21 +0000 (0:00:02.077) 0:01:07.836 ******** 2026-04-16 06:37:59.328816 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:37:59.328832 | orchestrator | 2026-04-16 06:37:59.328858 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-16 06:37:59.328880 | orchestrator | Thursday 16 April 2026 06:37:23 +0000 (0:00:02.227) 0:01:10.063 ******** 2026-04-16 06:37:59.328898 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:37:59.328916 | orchestrator | 2026-04-16 06:37:59.328935 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 06:37:59.328954 | orchestrator | Thursday 16 April 2026 06:37:59 +0000 (0:00:35.776) 0:01:45.840 ******** 2026-04-16 06:37:59.328974 | 
orchestrator | 2026-04-16 06:37:59.329004 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 06:38:47.994170 | orchestrator | Thursday 16 April 2026 06:37:59 +0000 (0:00:00.071) 0:01:45.912 ******** 2026-04-16 06:38:47.994310 | orchestrator | 2026-04-16 06:38:47.994336 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 06:38:47.994357 | orchestrator | Thursday 16 April 2026 06:37:59 +0000 (0:00:00.069) 0:01:45.982 ******** 2026-04-16 06:38:47.994377 | orchestrator | 2026-04-16 06:38:47.994397 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-16 06:38:47.994416 | orchestrator | Thursday 16 April 2026 06:37:59 +0000 (0:00:00.069) 0:01:46.051 ******** 2026-04-16 06:38:47.994434 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:38:47.994453 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:38:47.994471 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:38:47.994525 | orchestrator | 2026-04-16 06:38:47.994545 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-16 06:38:47.994563 | orchestrator | Thursday 16 April 2026 06:38:13 +0000 (0:00:14.510) 0:02:00.562 ******** 2026-04-16 06:38:47.994582 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:38:47.994600 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:38:47.994617 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:38:47.994631 | orchestrator | 2026-04-16 06:38:47.994644 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-16 06:38:47.994661 | orchestrator | Thursday 16 April 2026 06:38:19 +0000 (0:00:05.615) 0:02:06.178 ******** 2026-04-16 06:38:47.994679 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:38:47.994725 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:38:47.994737 | 
orchestrator | changed: [testbed-node-0] 2026-04-16 06:38:47.994750 | orchestrator | 2026-04-16 06:38:47.994764 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-16 06:38:47.994776 | orchestrator | Thursday 16 April 2026 06:38:29 +0000 (0:00:10.139) 0:02:16.317 ******** 2026-04-16 06:38:47.994789 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:38:47.994802 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:38:47.994814 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:38:47.994827 | orchestrator | 2026-04-16 06:38:47.994839 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:38:47.994853 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 06:38:47.994867 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 06:38:47.994878 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 06:38:47.994888 | orchestrator | 2026-04-16 06:38:47.994899 | orchestrator | 2026-04-16 06:38:47.994910 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:38:47.994921 | orchestrator | Thursday 16 April 2026 06:38:47 +0000 (0:00:18.038) 0:02:34.356 ******** 2026-04-16 06:38:47.994931 | orchestrator | =============================================================================== 2026-04-16 06:38:47.994956 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 35.78s 2026-04-16 06:38:47.994968 | orchestrator | manila : Restart manila-share container -------------------------------- 18.04s 2026-04-16 06:38:47.994983 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.51s 2026-04-16 06:38:47.994998 | orchestrator | service-ks-register : 
manila | Creating endpoints ---------------------- 12.70s 2026-04-16 06:38:47.995009 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.14s 2026-04-16 06:38:47.995020 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.27s 2026-04-16 06:38:47.995031 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.26s 2026-04-16 06:38:47.995041 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.62s 2026-04-16 06:38:47.995066 | orchestrator | manila : Copying over config.json files for services -------------------- 4.34s 2026-04-16 06:38:47.995093 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.90s 2026-04-16 06:38:47.995105 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.83s 2026-04-16 06:38:47.995116 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.67s 2026-04-16 06:38:47.995127 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.38s 2026-04-16 06:38:47.995137 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.15s 2026-04-16 06:38:47.995148 | orchestrator | manila : Check manila containers ---------------------------------------- 3.14s 2026-04-16 06:38:47.995169 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.08s 2026-04-16 06:38:47.995180 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.28s 2026-04-16 06:38:47.995190 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.23s 2026-04-16 06:38:47.995201 | orchestrator | manila : Creating Manila database --------------------------------------- 2.08s 2026-04-16 06:38:47.995212 | orchestrator | manila : Copy over multiple ceph 
configs for Manila --------------------- 1.64s 2026-04-16 06:38:48.268501 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-04-16 06:39:00.327289 | orchestrator | 2026-04-16 06:39:00 | INFO  | Task c05c9792-4235-4c77-88ef-659f2725aefa (netdata) was prepared for execution. 2026-04-16 06:39:00.327449 | orchestrator | 2026-04-16 06:39:00 | INFO  | It takes a moment until task c05c9792-4235-4c77-88ef-659f2725aefa (netdata) has been started and output is visible here. 2026-04-16 06:40:16.234541 | orchestrator | 2026-04-16 06:40:16.234625 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:40:16.234634 | orchestrator | 2026-04-16 06:40:16.234639 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:40:16.234644 | orchestrator | Thursday 16 April 2026 06:39:04 +0000 (0:00:00.231) 0:00:00.231 ******** 2026-04-16 06:40:16.234649 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-04-16 06:40:16.234654 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-04-16 06:40:16.234659 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-04-16 06:40:16.234664 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-04-16 06:40:16.234668 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-04-16 06:40:16.234673 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-04-16 06:40:16.234677 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-04-16 06:40:16.234715 | orchestrator | 2026-04-16 06:40:16.234720 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-04-16 06:40:16.234724 | orchestrator | 2026-04-16 06:40:16.234729 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 
2026-04-16 06:40:16.234733 | orchestrator | Thursday 16 April 2026 06:39:05 +0000 (0:00:00.834) 0:00:01.066 ******** 2026-04-16 06:40:16.234739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:40:16.234745 | orchestrator | 2026-04-16 06:40:16.234750 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-04-16 06:40:16.234754 | orchestrator | Thursday 16 April 2026 06:39:06 +0000 (0:00:01.231) 0:00:02.297 ******** 2026-04-16 06:40:16.234759 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:16.234764 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:16.234769 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:16.234773 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:16.234777 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:16.234781 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:16.234786 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:16.234790 | orchestrator | 2026-04-16 06:40:16.234794 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-04-16 06:40:16.234799 | orchestrator | Thursday 16 April 2026 06:39:08 +0000 (0:00:01.814) 0:00:04.111 ******** 2026-04-16 06:40:16.234803 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:16.234807 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:16.234812 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:16.234816 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:16.234820 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:16.234824 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:16.234828 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:16.234849 | orchestrator | 2026-04-16 06:40:16.234854 | orchestrator | TASK [osism.services.netdata 
: Add repository gpg key] ************************* 2026-04-16 06:40:16.234868 | orchestrator | Thursday 16 April 2026 06:39:10 +0000 (0:00:02.066) 0:00:06.178 ******** 2026-04-16 06:40:16.234873 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.234877 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:40:16.234881 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:40:16.234886 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:40:16.234890 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:40:16.234894 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:40:16.234898 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:40:16.234903 | orchestrator | 2026-04-16 06:40:16.234907 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-04-16 06:40:16.234911 | orchestrator | Thursday 16 April 2026 06:39:12 +0000 (0:00:01.636) 0:00:07.814 ******** 2026-04-16 06:40:16.234915 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.234920 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:40:16.234924 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:40:16.234928 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:40:16.234932 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:40:16.234937 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:40:16.234941 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:40:16.234945 | orchestrator | 2026-04-16 06:40:16.234950 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-04-16 06:40:16.234954 | orchestrator | Thursday 16 April 2026 06:39:27 +0000 (0:00:14.864) 0:00:22.679 ******** 2026-04-16 06:40:16.234958 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.234963 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:40:16.234967 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:40:16.234971 | orchestrator | changed: 
[testbed-node-4] 2026-04-16 06:40:16.234976 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:40:16.234980 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:40:16.234984 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:40:16.234988 | orchestrator | 2026-04-16 06:40:16.234992 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-04-16 06:40:16.234997 | orchestrator | Thursday 16 April 2026 06:39:51 +0000 (0:00:24.462) 0:00:47.141 ******** 2026-04-16 06:40:16.235002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:40:16.235008 | orchestrator | 2026-04-16 06:40:16.235013 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-04-16 06:40:16.235017 | orchestrator | Thursday 16 April 2026 06:39:53 +0000 (0:00:01.475) 0:00:48.616 ******** 2026-04-16 06:40:16.235021 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-04-16 06:40:16.235026 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-04-16 06:40:16.235030 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-04-16 06:40:16.235037 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-04-16 06:40:16.235056 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-04-16 06:40:16.235061 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-04-16 06:40:16.235066 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-04-16 06:40:16.235070 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-04-16 06:40:16.235074 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-04-16 06:40:16.235078 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 
2026-04-16 06:40:16.235082 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-04-16 06:40:16.235088 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-04-16 06:40:16.235092 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-04-16 06:40:16.235098 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-04-16 06:40:16.235107 | orchestrator | 2026-04-16 06:40:16.235112 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-04-16 06:40:16.235118 | orchestrator | Thursday 16 April 2026 06:39:56 +0000 (0:00:03.190) 0:00:51.807 ******** 2026-04-16 06:40:16.235122 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:16.235127 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:16.235132 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:16.235137 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:16.235142 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:16.235147 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:16.235151 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:16.235156 | orchestrator | 2026-04-16 06:40:16.235161 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-04-16 06:40:16.235166 | orchestrator | Thursday 16 April 2026 06:39:57 +0000 (0:00:01.176) 0:00:52.983 ******** 2026-04-16 06:40:16.235171 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.235176 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:40:16.235181 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:40:16.235186 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:40:16.235191 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:40:16.235195 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:40:16.235200 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:40:16.235205 | orchestrator | 2026-04-16 06:40:16.235209 | orchestrator | TASK 
[osism.services.netdata : Add netdata user to docker group] *************** 2026-04-16 06:40:16.235214 | orchestrator | Thursday 16 April 2026 06:39:58 +0000 (0:00:01.305) 0:00:54.289 ******** 2026-04-16 06:40:16.235220 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:16.235224 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:16.235229 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:16.235234 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:16.235239 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:16.235244 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:16.235248 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:16.235253 | orchestrator | 2026-04-16 06:40:16.235259 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-04-16 06:40:16.235264 | orchestrator | Thursday 16 April 2026 06:39:59 +0000 (0:00:01.200) 0:00:55.490 ******** 2026-04-16 06:40:16.235269 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:16.235274 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:16.235279 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:16.235283 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:16.235288 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:16.235293 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:16.235300 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:16.235305 | orchestrator | 2026-04-16 06:40:16.235310 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-04-16 06:40:16.235315 | orchestrator | Thursday 16 April 2026 06:40:01 +0000 (0:00:01.631) 0:00:57.121 ******** 2026-04-16 06:40:16.235320 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-04-16 06:40:16.235327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml 
for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:40:16.235332 | orchestrator | 2026-04-16 06:40:16.235337 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-04-16 06:40:16.235342 | orchestrator | Thursday 16 April 2026 06:40:02 +0000 (0:00:01.328) 0:00:58.450 ******** 2026-04-16 06:40:16.235347 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.235352 | orchestrator | 2026-04-16 06:40:16.235357 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-04-16 06:40:16.235362 | orchestrator | Thursday 16 April 2026 06:40:04 +0000 (0:00:02.040) 0:01:00.491 ******** 2026-04-16 06:40:16.235366 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:40:16.235374 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:40:16.235378 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:40:16.235382 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:40:16.235387 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:40:16.235391 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:40:16.235395 | orchestrator | changed: [testbed-manager] 2026-04-16 06:40:16.235399 | orchestrator | 2026-04-16 06:40:16.235404 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:40:16.235408 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.235413 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.235417 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.235422 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.235429 | orchestrator | testbed-node-3 : ok=15  
changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.626561 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.626663 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 06:40:16.626727 | orchestrator | 2026-04-16 06:40:16.626741 | orchestrator | 2026-04-16 06:40:16.626753 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:40:16.626766 | orchestrator | Thursday 16 April 2026 06:40:16 +0000 (0:00:11.312) 0:01:11.804 ******** 2026-04-16 06:40:16.626776 | orchestrator | =============================================================================== 2026-04-16 06:40:16.626787 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.46s 2026-04-16 06:40:16.626798 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.86s 2026-04-16 06:40:16.626809 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.31s 2026-04-16 06:40:16.626820 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.19s 2026-04-16 06:40:16.626830 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.07s 2026-04-16 06:40:16.626841 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.04s 2026-04-16 06:40:16.626851 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.81s 2026-04-16 06:40:16.626862 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.64s 2026-04-16 06:40:16.626873 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.63s 2026-04-16 06:40:16.626883 | orchestrator | osism.services.netdata : Include config tasks 
--------------------------- 1.48s 2026-04-16 06:40:16.626894 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.33s 2026-04-16 06:40:16.626904 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.31s 2026-04-16 06:40:16.626915 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.23s 2026-04-16 06:40:16.626925 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.20s 2026-04-16 06:40:16.626936 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.18s 2026-04-16 06:40:16.626948 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-04-16 06:40:18.937501 | orchestrator | 2026-04-16 06:40:18 | INFO  | Task b19262b3-4bf7-48b6-adf8-b248a1f4784b (prometheus) was prepared for execution. 2026-04-16 06:40:18.937646 | orchestrator | 2026-04-16 06:40:18 | INFO  | It takes a moment until task b19262b3-4bf7-48b6-adf8-b248a1f4784b (prometheus) has been started and output is visible here. 
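The PLAY RECAP blocks above report per-host `ok=`/`changed=`/`failed=`/`unreachable=` counters; a run is healthy when every host shows `failed=0` and `unreachable=0`. That check can be mechanized against a captured console log. A minimal sketch (the helper name `check_recap` and the sample file path are hypothetical, not part of this job):

```shell
#!/bin/sh
# Hypothetical helper: scan a captured Ansible console log for PLAY RECAP
# host lines and exit non-zero if any host reports failed>0 or unreachable>0.
check_recap() {
  # $1: file containing the captured console output
  awk '/unreachable=/ && /failed=/ {
    for (i = 1; i <= NF; i++) {
      split($i, kv, "=")
      if ((kv[1] == "failed" || kv[1] == "unreachable") && kv[2] + 0 > 0)
        bad = 1
    }
  } END { exit bad }' "$1"
}

# Example: a recap line like the netdata one above, with failed=0 everywhere.
printf 'testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\n' > /tmp/recap.txt
check_recap /tmp/recap.txt && echo "recap clean"   # prints "recap clean"
```

Such a check is useful as a post-processing gate in CI wrappers that only have the raw console text, since the recap format is stable across Ansible versions.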
2026-04-16 06:40:27.878364 | orchestrator | 2026-04-16 06:40:27.878480 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:40:27.878498 | orchestrator | 2026-04-16 06:40:27.878511 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:40:27.878523 | orchestrator | Thursday 16 April 2026 06:40:23 +0000 (0:00:00.261) 0:00:00.261 ******** 2026-04-16 06:40:27.878534 | orchestrator | ok: [testbed-manager] 2026-04-16 06:40:27.878546 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:40:27.878558 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:40:27.878569 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:40:27.878580 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:40:27.878590 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:40:27.878601 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:40:27.878612 | orchestrator | 2026-04-16 06:40:27.878623 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:40:27.878634 | orchestrator | Thursday 16 April 2026 06:40:23 +0000 (0:00:00.811) 0:00:01.073 ******** 2026-04-16 06:40:27.878646 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878657 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878668 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878709 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878721 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878732 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878743 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-04-16 06:40:27.878754 | orchestrator | 2026-04-16 06:40:27.878765 | orchestrator | PLAY [Apply role 
prometheus] *************************************************** 2026-04-16 06:40:27.878776 | orchestrator | 2026-04-16 06:40:27.878787 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-16 06:40:27.878798 | orchestrator | Thursday 16 April 2026 06:40:24 +0000 (0:00:00.863) 0:00:01.936 ******** 2026-04-16 06:40:27.878809 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:40:27.878822 | orchestrator | 2026-04-16 06:40:27.878834 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-04-16 06:40:27.878845 | orchestrator | Thursday 16 April 2026 06:40:26 +0000 (0:00:01.398) 0:00:03.335 ******** 2026-04-16 06:40:27.878860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 06:40:27.878877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.878919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.878945 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.878979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.878993 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:27.879006 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.879020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:27.879032 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:27.879043 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:27.879065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:27.879088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:28.925857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.925965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:28.925981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:28.925994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926009 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 06:40:28.926123 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:28.926197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926208 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 
2026-04-16 06:40:28.926219 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926239 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:28.926250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:28.926262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:28.926288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:33.838338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:33.838457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:33.838480 | orchestrator | 2026-04-16 06:40:33.838497 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-04-16 06:40:33.838514 | orchestrator | Thursday 16 April 2026 06:40:28 +0000 (0:00:02.823) 0:00:06.158 ******** 2026-04-16 06:40:33.838529 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 06:40:33.838545 | orchestrator | 2026-04-16 06:40:33.838560 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-04-16 06:40:33.838574 | orchestrator | Thursday 16 April 2026 06:40:30 +0000 (0:00:01.673) 0:00:07.832 ******** 2026-04-16 06:40:33.838617 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 06:40:33.838635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838791 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838806 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:40:33.838846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:33.838864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:33.838878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:33.838899 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:33.838925 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.894977 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:35.895133 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 06:40:35.895162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:35.895175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:35.895205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895238 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895249 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:35.895261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:35.895301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:35.895322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:36.677336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-04-16 06:40:36.677453 | orchestrator | 2026-04-16 06:40:36.677470 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-16 06:40:36.677484 | orchestrator | Thursday 16 April 2026 06:40:35 +0000 (0:00:05.296) 0:00:13.128 ******** 2026-04-16 06:40:36.677497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 06:40:36.677509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 06:40:36.677521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 06:40:36.677536 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-04-16 06:40:36.677591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 06:40:36.677622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 06:40:36.677662 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 06:40:36.677675 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 06:40:36.677736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-16 06:40:36.677770 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:36.677799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:36.677818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:36.677863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:37.322101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.351614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:37.351727 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:40:37.351746 | orchestrator | skipping: [testbed-manager]
2026-04-16 06:40:37.351760 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:40:37.351773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:37.351787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:37.351821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:37.351836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.351874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:37.351888 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:40:37.351931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:37.351945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.351958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.351969 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:40:37.351980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:37.351992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.352009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:37.352028 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:40:37.352039 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:37.352059 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:38.157397 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:38.157500 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:40:38.157517 | orchestrator |
2026-04-16 06:40:38.157529 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-04-16 06:40:38.157542 | orchestrator | Thursday 16 April 2026 06:40:37 +0000 (0:00:01.418) 0:00:14.547 ********
2026-04-16 06:40:38.157554 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-16 06:40:38.157568 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:38.157581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:38.157634 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-16 06:40:38.157670 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:38.157710 | orchestrator | skipping: [testbed-manager]
2026-04-16 06:40:38.157722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:38.157734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:38.157745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:38.157757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:38.157773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:38.157793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:38.157804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:38.157823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:39.289948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:39.290105 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:40:39.290117 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:40:39.290129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:39.290141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:39.290189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:39.290199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:39.290212 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:40:39.290234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:39.290241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290254 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:40:39.290260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:39.290276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:39.290289 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:40:39.290295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:39.290306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:42.689802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:42.689911 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:40:42.689929 | orchestrator |
2026-04-16 06:40:42.689943 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-04-16 06:40:42.689957 | orchestrator | Thursday 16 April 2026 06:40:39 +0000 (0:00:01.967) 0:00:16.514 ********
2026-04-16 06:40:42.689970 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-04-16 06:40:42.690009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690154 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690177 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 06:40:42.690198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:42.690210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:42.690227 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:42.690240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:42.690252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:42.690272 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:45.386882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 06:40:45.387023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:45.387041 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 06:40:45.387054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 06:40:45.387082 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-04-16 06:40:45.387098 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:45.387127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:45.387139 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:40:45.387160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:45.387172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:45.387189 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:45.387201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:40:45.387212 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:45.387224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:45.387246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:40:49.247935 | orchestrator | 2026-04-16 06:40:49.248069 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-04-16 06:40:49.248095 | orchestrator | Thursday 16 April 2026 06:40:45 +0000 (0:00:06.095) 0:00:22.609 ******** 2026-04-16 06:40:49.248115 | orchestrator | ok: [testbed-manager -> localhost] 
2026-04-16 06:40:49.248134 | orchestrator | 2026-04-16 06:40:49.248154 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-04-16 06:40:49.248174 | orchestrator | Thursday 16 April 2026 06:40:46 +0000 (0:00:00.816) 0:00:23.426 ******** 2026-04-16 06:40:49.248188 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248204 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248233 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:40:49.248245 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248259 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248270 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-04-16 06:40:49.248328 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248341 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248352 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248369 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088732, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8616874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248380 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248391 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248403 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:49.248430 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650717 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650823 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650847 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088691, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650865 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650872 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650955 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650964 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.650986 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088902, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:40:50.650993 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088736, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8627388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651004 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651010 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088696, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.86056, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651017 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088691, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651029 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651036 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088801, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8746877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:50.651047 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:51.794485 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088691, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:51.794605 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088805, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8888965, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:51.794621 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088736, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8627388, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})
2026-04-16 06:40:51.794634 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088691, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8556874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-04-16 06:40:51.794670 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:51.794748 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-04-16 06:40:51.794760 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-04-16 06:40:51.794791 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-04-16 06:40:51.794808 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-04-16 06:40:51.794820 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-04-16 06:40:51.794831 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-04-16 06:40:51.794851 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:51.794862 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-04-16 06:40:51.794874 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-04-16 06:40:51.794893 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-04-16 06:40:52.974509 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-04-16 06:40:52.974610 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-04-16 06:40:52.974653 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:52.974667 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:52.974742 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-04-16 06:40:52.974757 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:52.974769 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:52.974799 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:52.974819 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:52.974840 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:52.974852 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:52.974863 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-04-16 06:40:52.974874 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:52.974886 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:52.974905 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:53.969667 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:53.969908 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:53.969935 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-04-16 06:40:53.969956 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-04-16 06:40:53.969977 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:53.969997 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:53.970098 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:53.970169 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:53.970185 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:53.970199 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-04-16 06:40:53.970212 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:53.970225 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:53.970238 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:53.970251 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:53.970294 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:55.404407 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:55.404515 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-16 06:40:55.404531 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-04-16 06:40:55.404544 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:55.404556 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:55.404568 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-04-16 06:40:55.404618 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:55.404648 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2026-04-16 06:40:55.404660 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:55.404672 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-16 06:40:55.404732 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-04-16 06:40:55.404744 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-16 06:40:55.404755 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2026-04-16 06:40:55.404779 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-16 06:40:55.404800 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2026-04-16 06:40:56.417634 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2026-04-16 06:40:56.417792 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2026-04-16 06:40:56.417810 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-04-16 06:40:56.417821 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-04-16 06:40:56.417852 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2026-04-16 06:40:56.417876 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-04-16 06:40:56.417888 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-04-16 06:40:56.417915 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2026-04-16 06:40:56.417927 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2026-04-16 06:40:56.417938 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2026-04-16 06:40:56.417948 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2026-04-16 06:40:56.417966 | orchestrator | skipping:
[testbed-node-0] 2026-04-16 06:40:56.417979 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:56.417993 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088694, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.856329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:56.418004 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:40:56.418071 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:00.971931 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088801, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8746877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:00.972040 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088598, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8436177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972058 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088800, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972072 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088800, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972110 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972138 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088800, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972150 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:00.972163 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088796, 'dev': 112, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972194 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972206 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972217 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972236 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:00.972249 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088743, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8726876, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:00.972260 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972271 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:00.972290 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-04-16 06:41:00.972302 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:00.972323 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088729, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8615987, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.089846 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088897, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8902683, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.089961 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088596, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8433888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.089978 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088937, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090069 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088894, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8895123, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090088 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088694, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.856329, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090113 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088598, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8436177, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090125 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088800, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090157 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8736875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090170 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088933, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.895829, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-04-16 06:41:10.090191 | orchestrator | 2026-04-16 06:41:10.090205 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-04-16 06:41:10.090218 | orchestrator | Thursday 16 April 2026 06:41:07 +0000 (0:00:21.365) 0:00:44.791 ******** 2026-04-16 06:41:10.090229 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 06:41:10.090241 | orchestrator | 2026-04-16 06:41:10.090252 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-04-16 06:41:10.090263 | orchestrator | Thursday 16 April 2026 06:41:08 +0000 (0:00:00.735) 0:00:45.527 ******** 2026-04-16 06:41:10.090274 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090286 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090297 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090319 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090330 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090344 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090358 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090370 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090383 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-04-16 
06:41:10.090396 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090409 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090422 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090435 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090449 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090462 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090474 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090488 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090514 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090526 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090539 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090551 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090582 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090595 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:10.090608 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090621 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090633 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090646 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090659 | orchestrator | [WARNING]: Skipped 
2026-04-16 06:41:10.090672 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090755 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-04-16 06:41:10.090766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-04-16 06:41:10.090778 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-04-16 06:41:10.090789 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 06:41:10.090800 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 06:41:10.090818 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 06:41:10.090829 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 06:41:10.090840 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 06:41:10.090851 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 06:41:10.090862 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 06:41:10.090873 | orchestrator | 2026-04-16 06:41:10.090892 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-04-16 06:41:35.678116 | orchestrator | Thursday 16 April 2026 06:41:10 +0000 (0:00:01.787) 0:00:47.315 ******** 2026-04-16 06:41:35.678232 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678251 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.678264 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678276 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.678287 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678298 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.678309 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678319 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.678330 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678341 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.678352 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 06:41:35.678363 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.678374 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-16 06:41:35.678385 | orchestrator | 2026-04-16 06:41:35.678397 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-16 06:41:35.678408 | orchestrator | Thursday 16 April 2026 06:41:23 +0000 (0:00:13.333) 0:01:00.648 ******** 2026-04-16 06:41:35.678418 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678430 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678440 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678451 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.678462 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.678473 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.678483 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678494 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.678505 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678516 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 06:41:35.678526 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 06:41:35.678537 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.678548 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-16 06:41:35.678559 | orchestrator | 2026-04-16 06:41:35.678570 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-16 06:41:35.678582 | orchestrator | Thursday 16 April 2026 06:41:25 +0000 (0:00:02.279) 0:01:02.928 ******** 2026-04-16 06:41:35.678593 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678606 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678640 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.678652 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.678663 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678698 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.678723 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678735 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.678746 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678757 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.678767 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-16 06:41:35.678778 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 06:41:35.678789 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.678800 | orchestrator | 2026-04-16 06:41:35.678810 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-16 06:41:35.678821 | orchestrator | Thursday 16 April 2026 06:41:27 +0000 (0:00:01.450) 0:01:04.378 ******** 2026-04-16 06:41:35.678832 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 06:41:35.678843 | orchestrator | 2026-04-16 06:41:35.678854 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-16 06:41:35.678865 | orchestrator | Thursday 16 April 2026 06:41:27 +0000 (0:00:00.721) 0:01:05.100 ******** 2026-04-16 06:41:35.678876 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:35.678886 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.678897 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.678908 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.678935 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.678947 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.678958 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.678969 | orchestrator | 2026-04-16 06:41:35.678980 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-16 06:41:35.678990 | orchestrator | Thursday 16 April 2026 06:41:28 +0000 (0:00:00.590) 0:01:05.691 ******** 2026-04-16 06:41:35.679001 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:35.679012 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.679023 | orchestrator | skipping: [testbed-node-5] 
2026-04-16 06:41:35.679033 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.679044 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:41:35.679055 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:41:35.679066 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:41:35.679076 | orchestrator | 2026-04-16 06:41:35.679087 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-16 06:41:35.679098 | orchestrator | Thursday 16 April 2026 06:41:30 +0000 (0:00:01.883) 0:01:07.575 ******** 2026-04-16 06:41:35.679109 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679120 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679131 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:35.679142 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679153 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679163 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679174 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.679193 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.679204 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.679215 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.679226 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679237 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.679248 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 06:41:35.679259 | orchestrator | skipping: [testbed-node-5] 
2026-04-16 06:41:35.679269 | orchestrator | 2026-04-16 06:41:35.679280 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-16 06:41:35.679291 | orchestrator | Thursday 16 April 2026 06:41:31 +0000 (0:00:01.395) 0:01:08.970 ******** 2026-04-16 06:41:35.679302 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679313 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.679324 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679335 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.679346 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679357 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.679367 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679378 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.679389 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679400 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.679410 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 06:41:35.679436 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.679448 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-16 06:41:35.679458 | orchestrator | 2026-04-16 06:41:35.679474 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-16 06:41:35.679485 | orchestrator 
| Thursday 16 April 2026 06:41:33 +0000 (0:00:01.452) 0:01:10.423 ******** 2026-04-16 06:41:35.679496 | orchestrator | [WARNING]: Skipped 2026-04-16 06:41:35.679508 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-16 06:41:35.679518 | orchestrator | due to this access issue: 2026-04-16 06:41:35.679529 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-16 06:41:35.679540 | orchestrator | not a directory 2026-04-16 06:41:35.679550 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 06:41:35.679561 | orchestrator | 2026-04-16 06:41:35.679572 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-16 06:41:35.679582 | orchestrator | Thursday 16 April 2026 06:41:34 +0000 (0:00:01.087) 0:01:11.511 ******** 2026-04-16 06:41:35.679593 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:35.679604 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.679615 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.679625 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:35.679636 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:41:35.679646 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:35.679657 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:35.679667 | orchestrator | 2026-04-16 06:41:35.679708 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-16 06:41:35.679720 | orchestrator | Thursday 16 April 2026 06:41:35 +0000 (0:00:00.944) 0:01:12.455 ******** 2026-04-16 06:41:35.679731 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:35.679741 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:41:35.679759 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:41:35.679777 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:41:38.465631 | orchestrator | skipping: 
[testbed-node-3] 2026-04-16 06:41:38.465760 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:41:38.465774 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:41:38.465783 | orchestrator | 2026-04-16 06:41:38.465791 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-04-16 06:41:38.465801 | orchestrator | Thursday 16 April 2026 06:41:36 +0000 (0:00:00.859) 0:01:13.315 ******** 2026-04-16 06:41:38.465812 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-04-16 06:41:38.465823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:38.465931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:38.465939 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 06:41:38.465946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:38.465954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:38.465966 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:38.465974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:38.465993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.887648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.887856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.887894 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-04-16 06:41:41.887912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 
06:41:41.887943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.887989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.888009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.888054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.888075 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.888093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.888110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 06:41:41.888137 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.888172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.888191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 06:41:41.888211 | orchestrator | 2026-04-16 06:41:41.888232 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-04-16 06:41:41.888255 | orchestrator | Thursday 16 April 2026 06:41:39 +0000 (0:00:03.860) 0:01:17.176 ******** 2026-04-16 06:41:41.888275 | orchestrator | skipping: 
[testbed-manager] => (item=testbed-node-0)  2026-04-16 06:41:41.888293 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:41:41.888312 | orchestrator | 2026-04-16 06:41:41.888344 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383057 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:01.210) 0:01:18.386 ******** 2026-04-16 06:43:21.383197 | orchestrator | 2026-04-16 06:43:21.383222 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383241 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.288) 0:01:18.675 ******** 2026-04-16 06:43:21.383257 | orchestrator | 2026-04-16 06:43:21.383275 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383292 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.070) 0:01:18.746 ******** 2026-04-16 06:43:21.383308 | orchestrator | 2026-04-16 06:43:21.383324 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383340 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.068) 0:01:18.814 ******** 2026-04-16 06:43:21.383356 | orchestrator | 2026-04-16 06:43:21.383372 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383387 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.073) 0:01:18.888 ******** 2026-04-16 06:43:21.383403 | orchestrator | 2026-04-16 06:43:21.383419 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-04-16 06:43:21.383435 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.070) 0:01:18.958 ******** 2026-04-16 06:43:21.383451 | orchestrator | 2026-04-16 06:43:21.383467 | orchestrator | TASK [prometheus : Flush handlers] 
********************************************* 2026-04-16 06:43:21.383482 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.065) 0:01:19.024 ******** 2026-04-16 06:43:21.383498 | orchestrator | 2026-04-16 06:43:21.383514 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-04-16 06:43:21.383529 | orchestrator | Thursday 16 April 2026 06:41:41 +0000 (0:00:00.091) 0:01:19.115 ******** 2026-04-16 06:43:21.383546 | orchestrator | changed: [testbed-manager] 2026-04-16 06:43:21.383564 | orchestrator | 2026-04-16 06:43:21.383582 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-04-16 06:43:21.383598 | orchestrator | Thursday 16 April 2026 06:42:03 +0000 (0:00:21.643) 0:01:40.759 ******** 2026-04-16 06:43:21.383615 | orchestrator | changed: [testbed-manager] 2026-04-16 06:43:21.383632 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:43:21.383710 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:43:21.383730 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:43:21.383747 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:43:21.383765 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:43:21.383783 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:43:21.383800 | orchestrator | 2026-04-16 06:43:21.383818 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-04-16 06:43:21.383836 | orchestrator | Thursday 16 April 2026 06:42:11 +0000 (0:00:07.796) 0:01:48.555 ******** 2026-04-16 06:43:21.383853 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:43:21.383871 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:43:21.383889 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:43:21.383907 | orchestrator | 2026-04-16 06:43:21.383926 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-04-16 06:43:21.383944 | 
orchestrator | Thursday 16 April 2026 06:42:21 +0000 (0:00:10.287) 0:01:58.842 ******** 2026-04-16 06:43:21.383961 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:43:21.383978 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:43:21.383995 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:43:21.384011 | orchestrator | 2026-04-16 06:43:21.384028 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-04-16 06:43:21.384045 | orchestrator | Thursday 16 April 2026 06:42:31 +0000 (0:00:10.200) 0:02:09.043 ******** 2026-04-16 06:43:21.384061 | orchestrator | changed: [testbed-manager] 2026-04-16 06:43:21.384078 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:43:21.384094 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:43:21.384129 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:43:21.384147 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:43:21.384163 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:43:21.384180 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:43:21.384196 | orchestrator | 2026-04-16 06:43:21.384212 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-04-16 06:43:21.384229 | orchestrator | Thursday 16 April 2026 06:42:46 +0000 (0:00:14.284) 0:02:23.327 ******** 2026-04-16 06:43:21.384245 | orchestrator | changed: [testbed-manager] 2026-04-16 06:43:21.384262 | orchestrator | 2026-04-16 06:43:21.384279 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-04-16 06:43:21.384295 | orchestrator | Thursday 16 April 2026 06:42:54 +0000 (0:00:08.452) 0:02:31.780 ******** 2026-04-16 06:43:21.384313 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:43:21.384330 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:43:21.384346 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:43:21.384363 | orchestrator | 2026-04-16 
06:43:21.384379 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-04-16 06:43:21.384396 | orchestrator | Thursday 16 April 2026 06:43:05 +0000 (0:00:10.787) 0:02:42.567 ******** 2026-04-16 06:43:21.384413 | orchestrator | changed: [testbed-manager] 2026-04-16 06:43:21.384430 | orchestrator | 2026-04-16 06:43:21.384446 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-04-16 06:43:21.384463 | orchestrator | Thursday 16 April 2026 06:43:10 +0000 (0:00:05.574) 0:02:48.141 ******** 2026-04-16 06:43:21.384480 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:43:21.384496 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:43:21.384513 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:43:21.384530 | orchestrator | 2026-04-16 06:43:21.384546 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:43:21.384564 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-16 06:43:21.384582 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-16 06:43:21.384622 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-16 06:43:21.384651 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-04-16 06:43:21.384729 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-16 06:43:21.384748 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-16 06:43:21.384764 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-04-16 06:43:21.384779 | orchestrator | 2026-04-16 06:43:21.384795 | orchestrator | 
2026-04-16 06:43:21.384811 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:43:21.384827 | orchestrator | Thursday 16 April 2026 06:43:20 +0000 (0:00:09.959) 0:02:58.101 ******** 2026-04-16 06:43:21.384843 | orchestrator | =============================================================================== 2026-04-16 06:43:21.384875 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.64s 2026-04-16 06:43:21.384892 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.37s 2026-04-16 06:43:21.384921 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.28s 2026-04-16 06:43:21.384937 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.33s 2026-04-16 06:43:21.384953 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.79s 2026-04-16 06:43:21.384968 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.29s 2026-04-16 06:43:21.384983 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.20s 2026-04-16 06:43:21.384999 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.96s 2026-04-16 06:43:21.385015 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.45s 2026-04-16 06:43:21.385030 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 7.80s 2026-04-16 06:43:21.385046 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.10s 2026-04-16 06:43:21.385061 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.57s 2026-04-16 06:43:21.385077 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.30s 2026-04-16 
06:43:21.385093 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.86s 2026-04-16 06:43:21.385108 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.82s 2026-04-16 06:43:21.385123 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.28s 2026-04-16 06:43:21.385139 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.97s 2026-04-16 06:43:21.385155 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.88s 2026-04-16 06:43:21.385171 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.79s 2026-04-16 06:43:21.385194 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.67s 2026-04-16 06:43:24.429488 | orchestrator | 2026-04-16 06:43:24 | INFO  | Task 799c1e04-07c7-46ae-b7cc-3ffa671ce3c7 (grafana) was prepared for execution. 2026-04-16 06:43:24.429568 | orchestrator | 2026-04-16 06:43:24 | INFO  | It takes a moment until task 799c1e04-07c7-46ae-b7cc-3ffa671ce3c7 (grafana) has been started and output is visible here. 
2026-04-16 06:43:33.366351 | orchestrator |
2026-04-16 06:43:33.366479 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 06:43:33.366497 | orchestrator |
2026-04-16 06:43:33.366508 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 06:43:33.366519 | orchestrator | Thursday 16 April 2026 06:43:28 +0000 (0:00:00.191) 0:00:00.191 ********
2026-04-16 06:43:33.366552 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:43:33.366564 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:43:33.366573 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:43:33.366582 | orchestrator |
2026-04-16 06:43:33.366592 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 06:43:33.366602 | orchestrator | Thursday 16 April 2026 06:43:28 +0000 (0:00:00.248) 0:00:00.439 ********
2026-04-16 06:43:33.366612 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-16 06:43:33.366622 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-16 06:43:33.366631 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-16 06:43:33.366641 | orchestrator |
2026-04-16 06:43:33.366650 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-16 06:43:33.366660 | orchestrator |
2026-04-16 06:43:33.366697 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-16 06:43:33.366709 | orchestrator | Thursday 16 April 2026 06:43:28 +0000 (0:00:00.318) 0:00:00.758 ********
2026-04-16 06:43:33.366727 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:43:33.366754 | orchestrator |
2026-04-16 06:43:33.366771 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-16 06:43:33.366786 | orchestrator | Thursday 16 April 2026 06:43:29 +0000 (0:00:00.461) 0:00:01.219 ******** 2026-04-16 06:43:33.366806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:33.366828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:33.366842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-04-16 06:43:33.366858 | orchestrator |
2026-04-16 06:43:33.366876 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-16 06:43:33.366893 | orchestrator | Thursday 16 April 2026 06:43:30 +0000 (0:00:00.870) 0:00:02.090 ********
2026-04-16 06:43:33.366909 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-04-16 06:43:33.366940 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-04-16 06:43:33.366952 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:43:33.366963 | orchestrator |
2026-04-16 06:43:33.366974 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-16 06:43:33.366999 | orchestrator | Thursday 16 April 2026 06:43:31 +0000 (0:00:00.745) 0:00:02.835 ********
2026-04-16 06:43:33.367010 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:43:33.367022 | orchestrator |
2026-04-16 06:43:33.367033 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-16 06:43:33.367044 | orchestrator | Thursday 16 April 2026 06:43:31 +0000 (0:00:00.522) 0:00:03.357 ********
2026-04-16 06:43:33.367077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:33.367090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:33.367102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:33.367113 | orchestrator | 2026-04-16 06:43:33.367124 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-04-16 06:43:33.367135 | orchestrator | Thursday 16 April 2026 06:43:32 +0000 
(0:00:01.292) 0:00:04.650 ******** 2026-04-16 06:43:33.367147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:33.367159 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:43:33.367170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:33.367188 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:43:33.367214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:39.784256 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:43:39.784365 | orchestrator | 2026-04-16 06:43:39.784383 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-04-16 06:43:39.784397 | orchestrator | Thursday 16 April 2026 06:43:33 +0000 (0:00:00.529) 0:00:05.179 ******** 2026-04-16 06:43:39.784411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:39.784427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:39.784439 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:43:39.784451 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:43:39.784462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-04-16 06:43:39.784474 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:43:39.784485 | orchestrator | 2026-04-16 06:43:39.784533 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-04-16 06:43:39.784553 | orchestrator | Thursday 16 April 2026 06:43:33 +0000 (0:00:00.530) 0:00:05.709 ******** 2026-04-16 06:43:39.784572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784662 | orchestrator | 2026-04-16 06:43:39.784754 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-04-16 06:43:39.784767 | orchestrator | Thursday 16 April 2026 06:43:35 +0000 (0:00:01.249) 0:00:06.958 ******** 2026-04-16 06:43:39.784779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:43:39.784831 | 
orchestrator |
2026-04-16 06:43:39.784844 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-16 06:43:39.784857 | orchestrator | Thursday 16 April 2026 06:43:36 +0000 (0:00:01.452) 0:00:08.411 ********
2026-04-16 06:43:39.784869 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:43:39.784882 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:43:39.784909 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:43:39.784933 | orchestrator |
2026-04-16 06:43:39.784946 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-16 06:43:39.784959 | orchestrator | Thursday 16 April 2026 06:43:36 +0000 (0:00:00.280) 0:00:08.691 ********
2026-04-16 06:43:39.784971 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 06:43:39.784984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 06:43:39.784996 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 06:43:39.785008 | orchestrator |
2026-04-16 06:43:39.785027 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-16 06:43:39.785040 | orchestrator | Thursday 16 April 2026 06:43:38 +0000 (0:00:01.258) 0:00:09.949 ********
2026-04-16 06:43:39.785053 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 06:43:39.785066 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 06:43:39.785079 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 06:43:39.785092 | orchestrator |
2026-04-16 06:43:39.785104 |
orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-04-16 06:43:39.785126 | orchestrator | Thursday 16 April 2026 06:43:39 +0000 (0:00:01.639) 0:00:11.588 ********
2026-04-16 06:43:46.063728 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:43:46.063858 | orchestrator |
2026-04-16 06:43:46.063881 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-04-16 06:43:46.063900 | orchestrator | Thursday 16 April 2026 06:43:40 +0000 (0:00:00.742) 0:00:12.331 ********
2026-04-16 06:43:46.063916 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-04-16 06:43:46.063933 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-04-16 06:43:46.063951 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:43:46.063968 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:43:46.063985 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:43:46.064002 | orchestrator |
2026-04-16 06:43:46.064019 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-04-16 06:43:46.064035 | orchestrator | Thursday 16 April 2026 06:43:41 +0000 (0:00:00.754) 0:00:13.085 ********
2026-04-16 06:43:46.064052 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:43:46.064068 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:43:46.064084 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:43:46.064101 | orchestrator |
2026-04-16 06:43:46.064118 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-16 06:43:46.064134 | orchestrator | Thursday 16 April 2026 06:43:41 +0000 (0:00:00.317) 0:00:13.403 ********
2026-04-16 06:43:46.064183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088097, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.724024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088097, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.724024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088097, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.724024, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088358, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.787686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088358, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.787686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088358, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.787686, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088151, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7336853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088151, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7336853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088151, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7336853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064405 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088360, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7917953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088360, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7917953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:46.064457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088360, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7917953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:49.685436 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088169, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7397814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:49.685561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088169, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7397814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:43:49.685580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088169, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7397814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}})
2026-04-16 06:43:49.685593 – 06:44:05.012658 | orchestrator | changed: [testbed-node-0], [testbed-node-1], [testbed-node-2] => one item per dashboard file under /operations/grafana/dashboards/ (each a regular file, owner root:root, mode 0644):
  ceph/radosgw-overview.json (39556 bytes)
  ceph/README.md (84 bytes)
  ceph/ceph-cluster.json (34113 bytes)
  ceph/cephfs-overview.json (9025 bytes)
  ceph/pool-detail.json (19609 bytes)
  ceph/rbd-details.json (12997 bytes)
  ceph/ceph_overview.json (80386 bytes)
  ceph/radosgw-detail.json (19695 bytes)
  ceph/osds-overview.json (38432 bytes)
  ceph/multi-cluster-overview.json (62676 bytes)
  ceph/hosts-overview.json (27218 bytes)
  ceph/pool-overview.json (49139 bytes)
  ceph/host-details.json (44791 bytes)
  ceph/radosgw-sync-overview.json (16156 bytes)
  openstack/openstack.json (57270 bytes)
  infrastructure/haproxy.json (410814 bytes)
  infrastructure/database.json (30898 bytes)
  infrastructure/node-rsrc-use.json (15725 bytes)
  infrastructure/alertmanager-overview.json (9645 bytes)
  infrastructure/opensearch.json (65458 bytes)
2026-04-16 06:44:05.012742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088534, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.83189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True,
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:05.012752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088534, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.83189, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:05.012761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088475, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8266869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:05.012780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088475, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1776314932.8266869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.288989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088475, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8266869, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088538, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8329995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088538, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8329995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088538, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8329995, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088572, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8404698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088572, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8404698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088572, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8404698, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088527, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8286867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088527, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8286867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088527, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8286867, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088462, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8145845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289340 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088462, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8145845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:09.289361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088462, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8145845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088404, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7978015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734492 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088404, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7978015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088404, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7978015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088459, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.813874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-04-16 06:44:12.734563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088459, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.813874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088459, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.813874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088393, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.796282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088393, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.796282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088393, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.796282, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088468, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.81524, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088468, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.81524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088468, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.81524, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:12.734752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088552, 'dev': 112, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8399854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088552, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8399854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088552, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8399854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088544, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.834687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088544, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.834687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088544, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.834687, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088375, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7937222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088375, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7937222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088375, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7937222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637924 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088384, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7941084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088384, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7941084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088384, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.7941084, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 
06:44:16.637957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088524, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8276868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:44:16.637989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088524, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8276868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:45:57.957669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088524, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.8276868, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:45:57.957855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088541, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.833708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:45:57.957877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088541, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1776314932.833708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:45:57.957890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088541, 'dev': 112, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1776314932.833708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-04-16 06:45:57.957902 | orchestrator | 2026-04-16 06:45:57.957915 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-04-16 06:45:57.957928 | orchestrator | Thursday 16 April 2026 06:44:17 +0000 (0:00:36.273) 0:00:49.676 ******** 2026-04-16 06:45:57.957961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:45:57.957993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:45:57.958011 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-04-16 06:45:57.958077 | orchestrator | 2026-04-16 06:45:57.958088 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-04-16 06:45:57.958100 | orchestrator | Thursday 16 April 2026 06:44:18 +0000 (0:00:01.016) 0:00:50.693 ******** 2026-04-16 06:45:57.958111 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:45:57.958123 | orchestrator | 2026-04-16 06:45:57.958134 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-04-16 06:45:57.958145 | orchestrator | Thursday 16 April 2026 06:44:21 +0000 (0:00:02.428) 0:00:53.122 ******** 2026-04-16 06:45:57.958187 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:45:57.958200 | orchestrator | 2026-04-16 06:45:57.958211 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-16 06:45:57.958222 | orchestrator | Thursday 16 April 2026 06:44:23 +0000 (0:00:02.259) 0:00:55.381 ******** 2026-04-16 06:45:57.958233 | orchestrator | 2026-04-16 06:45:57.958244 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-16 06:45:57.958255 | orchestrator | Thursday 16 April 2026 06:44:23 +0000 (0:00:00.069) 0:00:55.450 ******** 2026-04-16 06:45:57.958265 | orchestrator | 
2026-04-16 06:45:57.958276 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-04-16 06:45:57.958287 | orchestrator | Thursday 16 April 2026 06:44:23 +0000 (0:00:00.072) 0:00:55.523 ******** 2026-04-16 06:45:57.958298 | orchestrator | 2026-04-16 06:45:57.958309 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-04-16 06:45:57.958319 | orchestrator | Thursday 16 April 2026 06:44:23 +0000 (0:00:00.069) 0:00:55.592 ******** 2026-04-16 06:45:57.958330 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:45:57.958341 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:45:57.958352 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:45:57.958362 | orchestrator | 2026-04-16 06:45:57.958373 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-04-16 06:45:57.958393 | orchestrator | Thursday 16 April 2026 06:44:26 +0000 (0:00:02.251) 0:00:57.844 ******** 2026-04-16 06:45:57.958404 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:45:57.958415 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:45:57.958426 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-04-16 06:45:57.958438 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-04-16 06:45:57.958449 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-04-16 06:45:57.958460 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-04-16 06:45:57.958471 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:45:57.958483 | orchestrator | 2026-04-16 06:45:57.958493 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-04-16 06:45:57.958504 | orchestrator | Thursday 16 April 2026 06:45:16 +0000 (0:00:50.459) 0:01:48.303 ******** 2026-04-16 06:45:57.958515 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:45:57.958526 | orchestrator | changed: [testbed-node-2] 2026-04-16 06:45:57.958536 | orchestrator | changed: [testbed-node-1] 2026-04-16 06:45:57.958547 | orchestrator | 2026-04-16 06:45:57.958558 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-04-16 06:45:57.958569 | orchestrator | Thursday 16 April 2026 06:45:52 +0000 (0:00:36.210) 0:02:24.513 ******** 2026-04-16 06:45:57.958580 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:45:57.958591 | orchestrator | 2026-04-16 06:45:57.958601 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-04-16 06:45:57.958612 | orchestrator | Thursday 16 April 2026 06:45:54 +0000 (0:00:02.268) 0:02:26.782 ******** 2026-04-16 06:45:57.958623 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:45:57.958634 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:45:57.958645 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:45:57.958655 | orchestrator | 2026-04-16 06:45:57.958666 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-04-16 06:45:57.958677 | orchestrator | Thursday 16 April 2026 06:45:55 +0000 (0:00:00.336) 0:02:27.118 ******** 2026-04-16 06:45:57.958689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-04-16 06:45:57.958711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-04-16 06:45:58.543133 | orchestrator | 2026-04-16 06:45:58.543257 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-04-16 06:45:58.543283 | orchestrator | Thursday 16 April 2026 06:45:57 +0000 (0:00:02.640) 0:02:29.759 ******** 2026-04-16 06:45:58.543302 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:45:58.543322 | orchestrator | 2026-04-16 06:45:58.543341 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:45:58.543361 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 06:45:58.543383 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 06:45:58.543425 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 06:45:58.543473 | orchestrator | 2026-04-16 06:45:58.543494 | orchestrator | 2026-04-16 06:45:58.543512 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:45:58.543530 | orchestrator | Thursday 16 April 2026 06:45:58 +0000 (0:00:00.288) 0:02:30.047 ******** 2026-04-16 06:45:58.543548 | orchestrator | =============================================================================== 2026-04-16 06:45:58.543567 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.46s 2026-04-16 06:45:58.543586 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.27s 2026-04-16 06:45:58.543605 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 36.21s 2026-04-16 06:45:58.543623 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.64s 2026-04-16 06:45:58.543640 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.43s 2026-04-16 06:45:58.543660 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.27s 2026-04-16 06:45:58.543678 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.26s 2026-04-16 06:45:58.543721 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.25s 2026-04-16 06:45:58.543742 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.64s 2026-04-16 06:45:58.543761 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s 2026-04-16 06:45:58.543854 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.29s 2026-04-16 06:45:58.543874 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.26s 2026-04-16 06:45:58.543892 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s 2026-04-16 06:45:58.543911 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2026-04-16 06:45:58.543930 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.87s 2026-04-16 06:45:58.543949 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2026-04-16 06:45:58.543969 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.75s 2026-04-16 06:45:58.543987 | orchestrator | grafana : Find custom grafana dashboards 
-------------------------------- 0.74s 2026-04-16 06:45:58.544007 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.53s 2026-04-16 06:45:58.544027 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.53s 2026-04-16 06:45:58.876885 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-04-16 06:45:58.884577 | orchestrator | + set -e 2026-04-16 06:45:58.884706 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 06:45:58.885050 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 06:45:58.885068 | orchestrator | ++ INTERACTIVE=false 2026-04-16 06:45:58.885076 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 06:45:58.885084 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 06:45:58.885228 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 06:45:58.886325 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 06:45:58.886347 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 06:45:58.886355 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 06:45:58.886363 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 06:45:58.886372 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 06:45:58.886380 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 06:45:58.886388 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 06:45:58.886396 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 06:45:58.886404 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 06:45:58.886412 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 06:45:58.886419 | orchestrator | ++ export ARA=false 2026-04-16 06:45:58.886427 | orchestrator | ++ ARA=false 2026-04-16 06:45:58.886435 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 06:45:58.886443 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 06:45:58.886451 | orchestrator | ++ export TEMPEST=false 2026-04-16 06:45:58.886458 | orchestrator | ++ 
TEMPEST=false 2026-04-16 06:45:58.886466 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 06:45:58.886474 | orchestrator | ++ IS_ZUUL=true 2026-04-16 06:45:58.886482 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 06:45:58.886514 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 06:45:58.886522 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 06:45:58.886530 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 06:45:58.886538 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 06:45:58.886546 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 06:45:58.886553 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 06:45:58.886561 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 06:45:58.886569 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 06:45:58.886577 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 06:45:58.887489 | orchestrator | ++ semver 9.5.0 8.0.0 2026-04-16 06:45:58.953086 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 06:45:58.953171 | orchestrator | + osism apply clusterapi 2026-04-16 06:46:00.994574 | orchestrator | 2026-04-16 06:46:00 | INFO  | Task 25f7bc96-aac2-4569-9b76-93ba83b0979e (clusterapi) was prepared for execution. 2026-04-16 06:46:00.995037 | orchestrator | 2026-04-16 06:46:00 | INFO  | It takes a moment until task 25f7bc96-aac2-4569-9b76-93ba83b0979e (clusterapi) has been started and output is visible here. 
2026-04-16 06:47:14.791834 | orchestrator | 2026-04-16 06:47:14.792017 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-04-16 06:47:14.792028 | orchestrator | 2026-04-16 06:47:14.792033 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-04-16 06:47:14.792038 | orchestrator | Thursday 16 April 2026 06:46:05 +0000 (0:00:00.236) 0:00:00.236 ******** 2026-04-16 06:47:14.792043 | orchestrator | included: cert_manager for testbed-manager 2026-04-16 06:47:14.792048 | orchestrator | 2026-04-16 06:47:14.792052 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-04-16 06:47:14.792057 | orchestrator | Thursday 16 April 2026 06:46:05 +0000 (0:00:00.243) 0:00:00.480 ******** 2026-04-16 06:47:14.792061 | orchestrator | changed: [testbed-manager] 2026-04-16 06:47:14.792066 | orchestrator | 2026-04-16 06:47:14.792070 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-04-16 06:47:14.792073 | orchestrator | Thursday 16 April 2026 06:46:10 +0000 (0:00:05.316) 0:00:05.797 ******** 2026-04-16 06:47:14.792077 | orchestrator | changed: [testbed-manager] 2026-04-16 06:47:14.792081 | orchestrator | 2026-04-16 06:47:14.792101 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-04-16 06:47:14.792105 | orchestrator | 2026-04-16 06:47:14.792109 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-04-16 06:47:14.792113 | orchestrator | Thursday 16 April 2026 06:46:53 +0000 (0:00:42.225) 0:00:48.022 ******** 2026-04-16 06:47:14.792117 | orchestrator | ok: [testbed-manager] 2026-04-16 06:47:14.792121 | orchestrator | 2026-04-16 06:47:14.792125 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-04-16 06:47:14.792129 | orchestrator | Thursday 
16 April 2026 06:46:54 +0000 (0:00:01.059) 0:00:49.082 ******** 2026-04-16 06:47:14.792133 | orchestrator | ok: [testbed-manager] 2026-04-16 06:47:14.792137 | orchestrator | 2026-04-16 06:47:14.792141 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-04-16 06:47:14.792144 | orchestrator | Thursday 16 April 2026 06:46:54 +0000 (0:00:00.145) 0:00:49.228 ******** 2026-04-16 06:47:14.792148 | orchestrator | ok: [testbed-manager] 2026-04-16 06:47:14.792152 | orchestrator | 2026-04-16 06:47:14.792156 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-04-16 06:47:14.792159 | orchestrator | Thursday 16 April 2026 06:47:12 +0000 (0:00:17.726) 0:01:06.954 ******** 2026-04-16 06:47:14.792163 | orchestrator | skipping: [testbed-manager] 2026-04-16 06:47:14.792167 | orchestrator | 2026-04-16 06:47:14.792171 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-04-16 06:47:14.792174 | orchestrator | Thursday 16 April 2026 06:47:12 +0000 (0:00:00.140) 0:01:07.094 ******** 2026-04-16 06:47:14.792178 | orchestrator | changed: [testbed-manager] 2026-04-16 06:47:14.792182 | orchestrator | 2026-04-16 06:47:14.792186 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:47:14.792190 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 06:47:14.792215 | orchestrator | 2026-04-16 06:47:14.792219 | orchestrator | 2026-04-16 06:47:14.792223 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:47:14.792227 | orchestrator | Thursday 16 April 2026 06:47:14 +0000 (0:00:02.213) 0:01:09.307 ******** 2026-04-16 06:47:14.792230 | orchestrator | =============================================================================== 2026-04-16 06:47:14.792234 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 42.23s 2026-04-16 06:47:14.792238 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.73s 2026-04-16 06:47:14.792241 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.32s 2026-04-16 06:47:14.792245 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.21s 2026-04-16 06:47:14.792249 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.06s 2026-04-16 06:47:14.792252 | orchestrator | Include cert_manager role ----------------------------------------------- 0.24s 2026-04-16 06:47:14.792256 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.15s 2026-04-16 06:47:14.792260 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s 2026-04-16 06:47:15.122471 | orchestrator | + osism apply magnum 2026-04-16 06:47:17.110982 | orchestrator | 2026-04-16 06:47:17 | INFO  | Task e1a677d8-61ea-498d-a285-a668e48b6f42 (magnum) was prepared for execution. 2026-04-16 06:47:17.111110 | orchestrator | 2026-04-16 06:47:17 | INFO  | It takes a moment until task e1a677d8-61ea-498d-a285-a668e48b6f42 (magnum) has been started and output is visible here. 
2026-04-16 06:48:00.789847 | orchestrator | 2026-04-16 06:48:00.790055 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:48:00.790080 | orchestrator | 2026-04-16 06:48:00.790091 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:48:00.790101 | orchestrator | Thursday 16 April 2026 06:47:20 +0000 (0:00:00.234) 0:00:00.234 ******** 2026-04-16 06:48:00.790111 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:48:00.790122 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:48:00.790131 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:48:00.790140 | orchestrator | 2026-04-16 06:48:00.790149 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:48:00.790158 | orchestrator | Thursday 16 April 2026 06:47:21 +0000 (0:00:00.278) 0:00:00.513 ******** 2026-04-16 06:48:00.790167 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-16 06:48:00.790176 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-16 06:48:00.790184 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-16 06:48:00.790191 | orchestrator | 2026-04-16 06:48:00.790199 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-16 06:48:00.790208 | orchestrator | 2026-04-16 06:48:00.790227 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-16 06:48:00.790238 | orchestrator | Thursday 16 April 2026 06:47:21 +0000 (0:00:00.331) 0:00:00.845 ******** 2026-04-16 06:48:00.790256 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:48:00.790266 | orchestrator | 2026-04-16 06:48:00.790276 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-04-16 
06:48:00.790285 | orchestrator | Thursday 16 April 2026 06:47:22 +0000 (0:00:00.474) 0:00:01.319 ******** 2026-04-16 06:48:00.790296 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-04-16 06:48:00.790305 | orchestrator | 2026-04-16 06:48:00.790315 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-04-16 06:48:00.790324 | orchestrator | Thursday 16 April 2026 06:47:25 +0000 (0:00:03.709) 0:00:05.029 ******** 2026-04-16 06:48:00.790334 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-04-16 06:48:00.790380 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-04-16 06:48:00.790391 | orchestrator | 2026-04-16 06:48:00.790401 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-04-16 06:48:00.790410 | orchestrator | Thursday 16 April 2026 06:47:32 +0000 (0:00:06.839) 0:00:11.869 ******** 2026-04-16 06:48:00.790420 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-04-16 06:48:00.790430 | orchestrator | 2026-04-16 06:48:00.790440 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-04-16 06:48:00.790450 | orchestrator | Thursday 16 April 2026 06:47:36 +0000 (0:00:03.743) 0:00:15.612 ******** 2026-04-16 06:48:00.790461 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-04-16 06:48:00.790471 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-04-16 06:48:00.790481 | orchestrator | 2026-04-16 06:48:00.790491 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-04-16 06:48:00.790500 | orchestrator | Thursday 16 April 2026 06:47:40 +0000 (0:00:04.109) 0:00:19.722 ******** 2026-04-16 06:48:00.790510 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-04-16 06:48:00.790519 | orchestrator | 2026-04-16 06:48:00.790529 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-04-16 06:48:00.790538 | orchestrator | Thursday 16 April 2026 06:47:43 +0000 (0:00:03.445) 0:00:23.168 ******** 2026-04-16 06:48:00.790547 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-04-16 06:48:00.790557 | orchestrator | 2026-04-16 06:48:00.790567 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-04-16 06:48:00.790576 | orchestrator | Thursday 16 April 2026 06:47:47 +0000 (0:00:03.835) 0:00:27.004 ******** 2026-04-16 06:48:00.790587 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:48:00.790596 | orchestrator | 2026-04-16 06:48:00.790605 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-04-16 06:48:00.790615 | orchestrator | Thursday 16 April 2026 06:47:51 +0000 (0:00:03.500) 0:00:30.504 ******** 2026-04-16 06:48:00.790625 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:48:00.790635 | orchestrator | 2026-04-16 06:48:00.790644 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-04-16 06:48:00.790654 | orchestrator | Thursday 16 April 2026 06:47:55 +0000 (0:00:04.176) 0:00:34.680 ******** 2026-04-16 06:48:00.790662 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:48:00.790672 | orchestrator | 2026-04-16 06:48:00.790681 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-16 06:48:00.790691 | orchestrator | Thursday 16 April 2026 06:47:59 +0000 (0:00:03.748) 0:00:38.428 ******** 2026-04-16 06:48:00.790726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:00.790740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:00.790762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:00.790772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:00.790783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:00.790800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:07.997474 | orchestrator |
2026-04-16 06:48:07.997579 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-04-16 06:48:07.997596 | orchestrator | Thursday 16 April 2026 06:48:00 +0000 (0:00:01.606) 0:00:40.035 ********
2026-04-16 06:48:07.997627 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:48:07.997638 | orchestrator |
2026-04-16 06:48:07.997648 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-04-16 06:48:07.997657 | orchestrator | Thursday 16 April 2026 06:48:00 +0000 (0:00:00.130) 0:00:40.165 ********
2026-04-16 06:48:07.997666 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:48:07.997676 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:48:07.997685 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:48:07.997694 | orchestrator |
2026-04-16 06:48:07.997704 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-04-16 06:48:07.997712 | orchestrator | Thursday 16 April 2026 06:48:01 +0000 (0:00:00.317) 0:00:40.483 ********
2026-04-16 06:48:07.997722 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 06:48:07.997732 | orchestrator |
2026-04-16 06:48:07.997741 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-04-16 06:48:07.997751 | orchestrator | Thursday 16 April 2026 06:48:02 +0000 (0:00:00.835) 0:00:41.319 ********
2026-04-16 06:48:07.997778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:07.997792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:07.997802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:07.997832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:07.997851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:07.997865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:07.997875 | orchestrator |
2026-04-16 06:48:07.997884 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-04-16 06:48:07.997894 | orchestrator | Thursday 16 April 2026 06:48:04 +0000 (0:00:02.351) 0:00:43.671 ********
2026-04-16 06:48:07.997903 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:48:07.997914 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:48:07.998122 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:48:07.998132 | orchestrator |
2026-04-16 06:48:07.998142 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-16 06:48:07.998151 | orchestrator | Thursday 16 April 2026 06:48:04 +0000 (0:00:00.477) 0:00:44.149 ********
2026-04-16 06:48:07.998162 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 06:48:07.998171 | orchestrator |
2026-04-16 06:48:07.998181 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-04-16 06:48:07.998190 | orchestrator | Thursday 16 April 2026 06:48:05 +0000 (0:00:00.567) 0:00:44.716 ********
2026-04-16 06:48:07.998199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:07.998229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:08.906831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:08.906989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:08.907006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:08.907017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:08.907047 | orchestrator |
2026-04-16 06:48:08.907059 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2026-04-16 06:48:08.907070 | orchestrator | Thursday 16 April 2026 06:48:07 +0000 (0:00:02.534) 0:00:47.251 ********
2026-04-16 06:48:08.907099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:08.907110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:08.907121 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:48:08.907138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:08.907149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:08.907166 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:48:08.907176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:08.907194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:12.374365 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:48:12.374460 | orchestrator |
2026-04-16 06:48:12.374473 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-04-16 06:48:12.374483 | orchestrator | Thursday 16 April 2026 06:48:08 +0000 (0:00:00.898) 0:00:48.149 ********
2026-04-16 06:48:12.374509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:12.374522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:12.374531 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:48:12.374540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:12.374566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:12.374574 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:48:12.374596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:12.374610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:12.374618 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:48:12.374626 | orchestrator |
2026-04-16 06:48:12.374634 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-04-16 06:48:12.374643 | orchestrator | Thursday 16 April 2026 06:48:09 +0000 (0:00:00.892) 0:00:49.042 ********
2026-04-16 06:48:12.374652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:12.374666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:12.374680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:18.313518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:18.313625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:18.313636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:18.313659 | orchestrator |
2026-04-16 06:48:18.313667 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-04-16 06:48:18.313675 | orchestrator | Thursday 16 April 2026 06:48:12 +0000 (0:00:02.578) 0:00:51.621 ********
2026-04-16 06:48:18.313682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:18.313701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:18.313711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-04-16 06:48:18.313718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:18.313730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-04-16 06:48:18.313736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor',
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:48:18.313743 | orchestrator | 2026-04-16 06:48:18.313749 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-16 06:48:18.313756 | orchestrator | Thursday 16 April 2026 06:48:17 +0000 (0:00:05.278) 0:00:56.899 ******** 2026-04-16 06:48:18.313768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-16 06:48:20.219274 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:48:20.219379 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:48:20.219396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-16 06:48:20.219431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:48:20.219443 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:48:20.219454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-04-16 06:48:20.219485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 06:48:20.219497 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:48:20.219508 | orchestrator | 2026-04-16 06:48:20.219520 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-04-16 06:48:20.219533 | orchestrator | Thursday 16 April 2026 06:48:18 +0000 (0:00:00.670) 0:00:57.569 ******** 2026-04-16 06:48:20.219551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-16 06:48:20.219571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-16 06:48:20.219591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-04-16 06:48:20.219611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:48:20.219641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 06:49:13.617927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})
2026-04-16 06:49:13.618301 | orchestrator |
2026-04-16 06:49:13.618351 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-04-16 06:49:13.618374 | orchestrator | Thursday 16 April 2026 06:48:20 +0000 (0:00:01.893) 0:00:59.463 ********
2026-04-16 06:49:13.618393 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:49:13.618413 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:49:13.618431 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:49:13.618450 | orchestrator |
2026-04-16 06:49:13.618469 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-04-16 06:49:13.618489 | orchestrator | Thursday 16 April 2026 06:48:20 +0000 (0:00:00.491) 0:00:59.954 ********
2026-04-16 06:49:13.618508 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:49:13.618526 | orchestrator |
2026-04-16 06:49:13.618545 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-04-16 06:49:13.618565 | orchestrator | Thursday 16 April 2026 06:48:22 +0000 (0:00:02.231) 0:01:02.185 ********
2026-04-16 06:49:13.618584 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:49:13.618602 | orchestrator |
2026-04-16 06:49:13.618621 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-04-16 06:49:13.618640 | orchestrator | Thursday 16 April 2026 06:48:25 +0000 (0:00:02.407) 0:01:04.593 ********
2026-04-16 06:49:13.618658 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:49:13.618677 | orchestrator |
2026-04-16 06:49:13.618696 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-16 06:49:13.618715 | orchestrator | Thursday 16 April 2026 06:48:42 +0000 (0:00:16.831) 0:01:21.424 ********
2026-04-16 06:49:13.618733 | orchestrator |
2026-04-16 06:49:13.618752 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-16 06:49:13.618771 | orchestrator | Thursday 16 April 2026 06:48:42 +0000 (0:00:00.071) 0:01:21.495 ********
2026-04-16 06:49:13.618789 | orchestrator |
2026-04-16 06:49:13.618808 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-04-16 06:49:13.618825 | orchestrator | Thursday 16 April 2026 06:48:42 +0000 (0:00:00.071) 0:01:21.567 ********
2026-04-16 06:49:13.618843 | orchestrator |
2026-04-16 06:49:13.618860 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-04-16 06:49:13.618880 | orchestrator | Thursday 16 April 2026 06:48:42 +0000 (0:00:00.070) 0:01:21.637 ********
2026-04-16 06:49:13.618899 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:49:13.618917 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:49:13.618935 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:49:13.618952 | orchestrator |
2026-04-16 06:49:13.618970 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-04-16 06:49:13.618989 | orchestrator | Thursday 16 April 2026 06:49:02 +0000 (0:00:19.844) 0:01:41.481 ********
2026-04-16 06:49:13.619044 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:49:13.619064 | orchestrator | changed: [testbed-node-1]
2026-04-16 06:49:13.619082 | orchestrator | changed: [testbed-node-2]
2026-04-16 06:49:13.619099 | orchestrator |
2026-04-16 06:49:13.619118 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:49:13.619139 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 06:49:13.619171 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:49:13.619182 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 06:49:13.619193 | orchestrator |
2026-04-16 06:49:13.619204 | orchestrator |
2026-04-16 06:49:13.619215 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:49:13.619225 | orchestrator | Thursday 16 April 2026 06:49:13 +0000 (0:00:11.055) 0:01:52.537 ********
2026-04-16 06:49:13.619235 | orchestrator | ===============================================================================
2026-04-16 06:49:13.619245 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.84s
2026-04-16 06:49:13.619254 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.83s
2026-04-16 06:49:13.619263 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.06s
2026-04-16 06:49:13.619273 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.84s
2026-04-16 06:49:13.619282 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.28s
2026-04-16 06:49:13.619291 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.18s
2026-04-16 06:49:13.619301 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.11s
2026-04-16 06:49:13.619329 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.84s
2026-04-16 06:49:13.619340 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.75s
2026-04-16 06:49:13.619359 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.74s
2026-04-16 06:49:13.619368 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.71s
2026-04-16 06:49:13.619378 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.50s
2026-04-16 06:49:13.619387 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.45s
2026-04-16 06:49:13.619397 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.58s
2026-04-16 06:49:13.619406 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.53s
2026-04-16 06:49:13.619415 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.41s
2026-04-16 06:49:13.619425 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.35s
2026-04-16 06:49:13.619434 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.23s
2026-04-16 06:49:13.619443 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.89s
2026-04-16 06:49:13.619453 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.61s
2026-04-16 06:49:14.339429 | orchestrator | ok: Runtime: 1:38:08.671255
2026-04-16 06:49:14.606131 |
2026-04-16 06:49:14.606321 | TASK [Deploy in a nutshell]
2026-04-16 06:49:15.140751 | orchestrator | skipping: Conditional result was False
2026-04-16 06:49:15.164539 |
2026-04-16 06:49:15.164702 | TASK [Bootstrap services]
2026-04-16 06:49:15.871672 | orchestrator |
2026-04-16 06:49:15.871886 | orchestrator | # BOOTSTRAP
2026-04-16 06:49:15.871909 | orchestrator |
2026-04-16 06:49:15.871915 | orchestrator | + set -e
2026-04-16 06:49:15.871920 | orchestrator | + echo
2026-04-16 06:49:15.871926 | orchestrator | + echo '# BOOTSTRAP'
2026-04-16 06:49:15.871934 | orchestrator | + echo
2026-04-16 06:49:15.871966 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-04-16 06:49:15.880238 | orchestrator | + set -e
2026-04-16 06:49:15.880302 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-04-16 06:49:17.977075 | orchestrator | 2026-04-16 06:49:17 | INFO  | It takes a moment until task 449f83dd-77dd-4f6e-a766-9d6bdbbd9e26 (flavor-manager) has been started and output is visible here.
2026-04-16 06:49:25.493644 | orchestrator | 2026-04-16 06:49:20 | INFO  | Flavor SCS-1L-1 created
2026-04-16 06:49:25.493762 | orchestrator | 2026-04-16 06:49:21 | INFO  | Flavor SCS-1L-1-5 created
2026-04-16 06:49:25.493777 | orchestrator | 2026-04-16 06:49:21 | INFO  | Flavor SCS-1V-2 created
2026-04-16 06:49:25.493787 | orchestrator | 2026-04-16 06:49:21 | INFO  | Flavor SCS-1V-2-5 created
2026-04-16 06:49:25.493795 | orchestrator | 2026-04-16 06:49:21 | INFO  | Flavor SCS-1V-4 created
2026-04-16 06:49:25.493804 | orchestrator | 2026-04-16 06:49:21 | INFO  | Flavor SCS-1V-4-10 created
2026-04-16 06:49:25.493812 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-1V-8 created
2026-04-16 06:49:25.493822 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-1V-8-20 created
2026-04-16 06:49:25.493839 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-2V-4 created
2026-04-16 06:49:25.493847 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-2V-4-10 created
2026-04-16 06:49:25.493856 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-2V-8 created
2026-04-16 06:49:25.493864 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-2V-8-20 created
2026-04-16 06:49:25.493872 | orchestrator | 2026-04-16 06:49:22 | INFO  | Flavor SCS-2V-16 created
2026-04-16 06:49:25.493880 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-2V-16-50 created
2026-04-16 06:49:25.493888 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-8 created
2026-04-16 06:49:25.493896 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-8-20 created
2026-04-16 06:49:25.493904 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-16 created
2026-04-16 06:49:25.493912 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-16-50 created
2026-04-16 06:49:25.493920 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-32 created
2026-04-16 06:49:25.493928 | orchestrator | 2026-04-16 06:49:23 | INFO  | Flavor SCS-4V-32-100 created
2026-04-16 06:49:25.493936 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-8V-16 created
2026-04-16 06:49:25.493944 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-8V-16-50 created
2026-04-16 06:49:25.493952 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-8V-32 created
2026-04-16 06:49:25.493960 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-8V-32-100 created
2026-04-16 06:49:25.493968 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-16V-32 created
2026-04-16 06:49:25.493976 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-16V-32-100 created
2026-04-16 06:49:25.493984 | orchestrator | 2026-04-16 06:49:24 | INFO  | Flavor SCS-2V-4-20s created
2026-04-16 06:49:25.493992 | orchestrator | 2026-04-16 06:49:25 | INFO  | Flavor SCS-4V-8-50s created
2026-04-16 06:49:25.494000 | orchestrator | 2026-04-16 06:49:25 | INFO  | Flavor SCS-8V-32-100s created
2026-04-16 06:49:27.804206 | orchestrator | 2026-04-16 06:49:27 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-04-16 06:49:37.925638 | orchestrator | 2026-04-16 06:49:37 | INFO  | Task 08c3b246-5976-4b2c-9e9c-e6ae0a4be103 (bootstrap-basic) was prepared for execution.
2026-04-16 06:49:37.925778 | orchestrator | 2026-04-16 06:49:37 | INFO  | It takes a moment until task 08c3b246-5976-4b2c-9e9c-e6ae0a4be103 (bootstrap-basic) has been started and output is visible here.
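[Editor's note, not part of the job output] The flavor names created by flavor-manager above follow the SCS flavor naming convention: roughly `SCS-<vCPUs><CPU class>-<RAM GiB>[-<root disk GiB>[s]]`, where `V` denotes a standard oversubscribable vCPU class, `L` a low-performance class, and a trailing `s` on the disk size a local SSD variant. A minimal, hypothetical decoder for names of this shape (the helper and field names are illustrative, not part of flavor-manager):

```python
import re

# Decode an SCS-style flavor name such as "SCS-4V-16-50" into its parts.
# Assumptions: CPU class is a single letter, RAM and disk are integers,
# and a trailing "s" on the disk size marks a local-SSD variant.
_PATTERN = re.compile(
    r"^SCS-(?P<vcpus>\d+)(?P<cpu>[A-Z])-(?P<ram>\d+)(?:-(?P<disk>\d+)(?P<ssd>s)?)?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Return vCPU count, CPU class, RAM and disk sizes for an SCS flavor name."""
    m = _PATTERN.match(name)
    if m is None:
        raise ValueError(f"not an SCS-style flavor name: {name}")
    return {
        "vcpus": int(m.group("vcpus")),
        "cpu_class": m.group("cpu"),
        "ram_gb": int(m.group("ram")),
        # Flavors like SCS-1L-1 or SCS-2V-8 carry no root disk size.
        "disk_gb": int(m.group("disk")) if m.group("disk") else 0,
        "local_ssd": m.group("ssd") is not None,
    }

print(parse_scs_flavor("SCS-4V-16-50"))
# {'vcpus': 4, 'cpu_class': 'V', 'ram_gb': 16, 'disk_gb': 50, 'local_ssd': False}
```

For example, `SCS-2V-4-20s` from the list above decodes to 2 vCPUs, 4 GiB RAM, and a 20 GiB local-SSD root disk.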
2026-04-16 06:50:19.808701 | orchestrator |
2026-04-16 06:50:19.808797 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-04-16 06:50:19.808808 | orchestrator |
2026-04-16 06:50:19.808815 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-16 06:50:19.808823 | orchestrator | Thursday 16 April 2026 06:49:42 +0000 (0:00:00.066) 0:00:00.066 ********
2026-04-16 06:50:19.808830 | orchestrator | ok: [localhost]
2026-04-16 06:50:19.808838 | orchestrator |
2026-04-16 06:50:19.808845 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-04-16 06:50:19.808852 | orchestrator | Thursday 16 April 2026 06:49:43 +0000 (0:00:01.875) 0:00:01.942 ********
2026-04-16 06:50:19.808860 | orchestrator | ok: [localhost]
2026-04-16 06:50:19.808867 | orchestrator |
2026-04-16 06:50:19.808874 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-04-16 06:50:19.808882 | orchestrator | Thursday 16 April 2026 06:49:50 +0000 (0:00:06.744) 0:00:08.687 ********
2026-04-16 06:50:19.808889 | orchestrator | changed: [localhost]
2026-04-16 06:50:19.808897 | orchestrator |
2026-04-16 06:50:19.808904 | orchestrator | TASK [Create public network] ***************************************************
2026-04-16 06:50:19.808912 | orchestrator | Thursday 16 April 2026 06:49:56 +0000 (0:00:06.032) 0:00:14.720 ********
2026-04-16 06:50:19.808919 | orchestrator | changed: [localhost]
2026-04-16 06:50:19.808926 | orchestrator |
2026-04-16 06:50:19.808933 | orchestrator | TASK [Set public network to default] *******************************************
2026-04-16 06:50:19.808941 | orchestrator | Thursday 16 April 2026 06:50:02 +0000 (0:00:05.386) 0:00:20.106 ********
2026-04-16 06:50:19.808951 | orchestrator | changed: [localhost]
2026-04-16 06:50:19.808959 | orchestrator |
2026-04-16 06:50:19.808966 | orchestrator | TASK [Create public subnet] ****************************************************
2026-04-16 06:50:19.808973 | orchestrator | Thursday 16 April 2026 06:50:08 +0000 (0:00:06.101) 0:00:26.208 ********
2026-04-16 06:50:19.808981 | orchestrator | changed: [localhost]
2026-04-16 06:50:19.808988 | orchestrator |
2026-04-16 06:50:19.808995 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-04-16 06:50:19.809002 | orchestrator | Thursday 16 April 2026 06:50:12 +0000 (0:00:04.255) 0:00:30.464 ********
2026-04-16 06:50:19.809010 | orchestrator | changed: [localhost]
2026-04-16 06:50:19.809017 | orchestrator |
2026-04-16 06:50:19.809024 | orchestrator | TASK [Create manager role] *****************************************************
2026-04-16 06:50:19.809040 | orchestrator | Thursday 16 April 2026 06:50:16 +0000 (0:00:03.605) 0:00:34.069 ********
2026-04-16 06:50:19.809048 | orchestrator | ok: [localhost]
2026-04-16 06:50:19.809055 | orchestrator |
2026-04-16 06:50:19.809093 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:50:19.809106 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 06:50:19.809119 | orchestrator |
2026-04-16 06:50:19.809132 | orchestrator |
2026-04-16 06:50:19.809143 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:50:19.809156 | orchestrator | Thursday 16 April 2026 06:50:19 +0000 (0:00:03.436) 0:00:37.505 ********
2026-04-16 06:50:19.809164 | orchestrator | ===============================================================================
2026-04-16 06:50:19.809171 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.74s
2026-04-16 06:50:19.809179 | orchestrator | Set public network to default ------------------------------------------- 6.10s
2026-04-16 06:50:19.809186 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.03s
2026-04-16 06:50:19.809193 | orchestrator | Create public network --------------------------------------------------- 5.39s
2026-04-16 06:50:19.809217 | orchestrator | Create public subnet ---------------------------------------------------- 4.26s
2026-04-16 06:50:19.809224 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.61s
2026-04-16 06:50:19.809232 | orchestrator | Create manager role ----------------------------------------------------- 3.44s
2026-04-16 06:50:19.809239 | orchestrator | Gathering Facts --------------------------------------------------------- 1.88s
2026-04-16 06:50:22.263823 | orchestrator | 2026-04-16 06:50:22 | INFO  | It takes a moment until task 1d77d1c1-ae7d-4863-9f02-5396221f7950 (image-manager) has been started and output is visible here.
2026-04-16 06:51:06.049021 | orchestrator | 2026-04-16 06:50:24 | INFO  | Processing image 'Cirros 0.6.2'
2026-04-16 06:51:06.049134 | orchestrator | 2026-04-16 06:50:25 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-04-16 06:51:06.049147 | orchestrator | 2026-04-16 06:50:25 | INFO  | Importing image Cirros 0.6.2
2026-04-16 06:51:06.049154 | orchestrator | 2026-04-16 06:50:25 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-16 06:51:06.049161 | orchestrator | 2026-04-16 06:50:27 | INFO  | Waiting for image to leave queued state...
2026-04-16 06:51:06.049168 | orchestrator | 2026-04-16 06:50:29 | INFO  | Waiting for import to complete...
2026-04-16 06:51:06.049174 | orchestrator | 2026-04-16 06:50:39 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-04-16 06:51:06.049181 | orchestrator | 2026-04-16 06:50:39 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-04-16 06:51:06.049187 | orchestrator | 2026-04-16 06:50:39 | INFO  | Setting internal_version = 0.6.2
2026-04-16 06:51:06.049193 | orchestrator | 2026-04-16 06:50:39 | INFO  | Setting image_original_user = cirros
2026-04-16 06:51:06.049199 | orchestrator | 2026-04-16 06:50:39 | INFO  | Adding tag os:cirros
2026-04-16 06:51:06.049205 | orchestrator | 2026-04-16 06:50:40 | INFO  | Setting property architecture: x86_64
2026-04-16 06:51:06.049211 | orchestrator | 2026-04-16 06:50:40 | INFO  | Setting property hw_disk_bus: scsi
2026-04-16 06:51:06.049217 | orchestrator | 2026-04-16 06:50:40 | INFO  | Setting property hw_rng_model: virtio
2026-04-16 06:51:06.049223 | orchestrator | 2026-04-16 06:50:41 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-16 06:51:06.049229 | orchestrator | 2026-04-16 06:50:41 | INFO  | Setting property hw_watchdog_action: reset
2026-04-16 06:51:06.049235 | orchestrator | 2026-04-16 06:50:41 | INFO  | Setting property hypervisor_type: qemu
2026-04-16 06:51:06.049240 | orchestrator | 2026-04-16 06:50:41 | INFO  | Setting property os_distro: cirros
2026-04-16 06:51:06.049246 | orchestrator | 2026-04-16 06:50:42 | INFO  | Setting property os_purpose: minimal
2026-04-16 06:51:06.049252 | orchestrator | 2026-04-16 06:50:42 | INFO  | Setting property replace_frequency: never
2026-04-16 06:51:06.049258 | orchestrator | 2026-04-16 06:50:42 | INFO  | Setting property uuid_validity: none
2026-04-16 06:51:06.049264 | orchestrator | 2026-04-16 06:50:42 | INFO  | Setting property provided_until: none
2026-04-16 06:51:06.049269 | orchestrator | 2026-04-16 06:50:43 | INFO  | Setting property image_description: Cirros
2026-04-16 06:51:06.049275 | orchestrator | 2026-04-16 06:50:43 | INFO  | Setting property image_name: Cirros
2026-04-16 06:51:06.049281 | orchestrator | 2026-04-16 06:50:44 | INFO  | Setting property internal_version: 0.6.2
2026-04-16 06:51:06.049287 | orchestrator | 2026-04-16 06:50:44 | INFO  | Setting property image_original_user: cirros
2026-04-16 06:51:06.049310 | orchestrator | 2026-04-16 06:50:44 | INFO  | Setting property os_version: 0.6.2
2026-04-16 06:51:06.049323 | orchestrator | 2026-04-16 06:50:44 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-04-16 06:51:06.049331 | orchestrator | 2026-04-16 06:50:45 | INFO  | Setting property image_build_date: 2023-05-30
2026-04-16 06:51:06.049337 | orchestrator | 2026-04-16 06:50:45 | INFO  | Checking status of 'Cirros 0.6.2'
2026-04-16 06:51:06.049343 | orchestrator | 2026-04-16 06:50:45 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-04-16 06:51:06.049349 | orchestrator | 2026-04-16 06:50:45 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-04-16 06:51:06.049355 | orchestrator | 2026-04-16 06:50:45 | INFO  | Processing image 'Cirros 0.6.3'
2026-04-16 06:51:06.049364 | orchestrator | 2026-04-16 06:50:45 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-04-16 06:51:06.049370 | orchestrator | 2026-04-16 06:50:45 | INFO  | Importing image Cirros 0.6.3
2026-04-16 06:51:06.049376 | orchestrator | 2026-04-16 06:50:45 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-16 06:51:06.049382 | orchestrator | 2026-04-16 06:50:47 | INFO  | Waiting for image to leave queued state...
2026-04-16 06:51:06.049387 | orchestrator | 2026-04-16 06:50:49 | INFO  | Waiting for import to complete...
2026-04-16 06:51:06.049405 | orchestrator | 2026-04-16 06:50:59 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-04-16 06:51:06.049411 | orchestrator | 2026-04-16 06:51:00 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-04-16 06:51:06.049417 | orchestrator | 2026-04-16 06:51:00 | INFO  | Setting internal_version = 0.6.3
2026-04-16 06:51:06.049423 | orchestrator | 2026-04-16 06:51:00 | INFO  | Setting image_original_user = cirros
2026-04-16 06:51:06.049428 | orchestrator | 2026-04-16 06:51:00 | INFO  | Adding tag os:cirros
2026-04-16 06:51:06.049434 | orchestrator | 2026-04-16 06:51:00 | INFO  | Setting property architecture: x86_64
2026-04-16 06:51:06.049440 | orchestrator | 2026-04-16 06:51:00 | INFO  | Setting property hw_disk_bus: scsi
2026-04-16 06:51:06.049446 | orchestrator | 2026-04-16 06:51:00 | INFO  | Setting property hw_rng_model: virtio
2026-04-16 06:51:06.049451 | orchestrator | 2026-04-16 06:51:01 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-16 06:51:06.049457 | orchestrator | 2026-04-16 06:51:01 | INFO  | Setting property hw_watchdog_action: reset
2026-04-16 06:51:06.049463 | orchestrator | 2026-04-16 06:51:01 | INFO  | Setting property hypervisor_type: qemu
2026-04-16 06:51:06.049469 | orchestrator | 2026-04-16 06:51:02 | INFO  | Setting property os_distro: cirros
2026-04-16 06:51:06.049475 | orchestrator | 2026-04-16 06:51:02 | INFO  | Setting property os_purpose: minimal
2026-04-16 06:51:06.049480 | orchestrator | 2026-04-16 06:51:02 | INFO  | Setting property replace_frequency: never
2026-04-16 06:51:06.049486 | orchestrator | 2026-04-16 06:51:02 | INFO  | Setting property uuid_validity: none
2026-04-16 06:51:06.049492 | orchestrator | 2026-04-16 06:51:02 | INFO  | Setting property provided_until: none
2026-04-16 06:51:06.049498 | orchestrator | 2026-04-16 06:51:03 | INFO  | Setting property image_description: Cirros
2026-04-16 06:51:06.049504 | orchestrator | 2026-04-16 06:51:03 | INFO  | Setting property image_name: Cirros
2026-04-16 06:51:06.049509 | orchestrator | 2026-04-16 06:51:03 | INFO  | Setting property internal_version: 0.6.3
2026-04-16 06:51:06.049520 | orchestrator | 2026-04-16 06:51:03 | INFO  | Setting property image_original_user: cirros
2026-04-16 06:51:06.049526 | orchestrator | 2026-04-16 06:51:04 | INFO  | Setting property os_version: 0.6.3
2026-04-16 06:51:06.049532 | orchestrator | 2026-04-16 06:51:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-04-16 06:51:06.049538 | orchestrator | 2026-04-16 06:51:04 | INFO  | Setting property image_build_date: 2024-09-26
2026-04-16 06:51:06.049544 | orchestrator | 2026-04-16 06:51:04 | INFO  | Checking status of 'Cirros 0.6.3'
2026-04-16 06:51:06.049550 | orchestrator | 2026-04-16 06:51:04 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-04-16 06:51:06.049555 | orchestrator | 2026-04-16 06:51:04 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-04-16 06:51:06.380572 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amphora-image.sh
2026-04-16 06:51:08.658092 | orchestrator | 2026-04-16 06:51:08 | INFO  | date: 2026-04-16
2026-04-16 06:51:08.658254 | orchestrator | 2026-04-16 06:51:08 | INFO  | image: octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-16 06:51:08.658292 | orchestrator | 2026-04-16 06:51:08 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-16 06:51:08.658392 | orchestrator | 2026-04-16 06:51:08 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2.CHECKSUM
2026-04-16 06:51:08.814376 | orchestrator | 2026-04-16 06:51:08 | INFO  | checksum: d0860f46848f6ee8ed337cc33d5ba7e96db2ef81fcfd28d6d9ee3a3b596108d8
2026-04-16 06:51:08.886603 | orchestrator | 2026-04-16 06:51:08 | INFO  | It takes a moment until task c72bd107-5697-4fdc-9143-d2d9567c2801 (image-manager) has been started and output is visible here.
2026-04-16 06:52:11.374210 | orchestrator | 2026-04-16 06:51:11 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-04-16'
2026-04-16 06:52:11.374318 | orchestrator | 2026-04-16 06:51:11 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2: 200
2026-04-16 06:52:11.374334 | orchestrator | 2026-04-16 06:51:11 | INFO  | Importing image OpenStack Octavia Amphora 2026-04-16
2026-04-16 06:52:11.374344 | orchestrator | 2026-04-16 06:51:11 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-16 06:52:11.374355 | orchestrator | 2026-04-16 06:51:12 | INFO  | Waiting for image to leave queued state...
2026-04-16 06:52:11.374364 | orchestrator | 2026-04-16 06:51:14 | INFO  | Waiting for import to complete...
2026-04-16 06:52:11.374373 | orchestrator | 2026-04-16 06:51:24 | INFO  | Waiting for import to complete...
2026-04-16 06:52:11.374382 | orchestrator | 2026-04-16 06:51:34 | INFO  | Waiting for import to complete...
2026-04-16 06:52:11.374391 | orchestrator | 2026-04-16 06:51:45 | INFO  | Waiting for import to complete...
2026-04-16 06:52:11.374401 | orchestrator | 2026-04-16 06:51:55 | INFO  | Waiting for import to complete...
2026-04-16 06:52:11.374411 | orchestrator | 2026-04-16 06:52:05 | INFO  | Import of 'OpenStack Octavia Amphora 2026-04-16' successfully completed, reloading images
2026-04-16 06:52:11.374421 | orchestrator | 2026-04-16 06:52:05 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-16 06:52:11.374430 | orchestrator | 2026-04-16 06:52:05 | INFO  | Setting internal_version = 2026-04-16
2026-04-16 06:52:11.374460 | orchestrator | 2026-04-16 06:52:05 | INFO  | Setting image_original_user = ubuntu
2026-04-16 06:52:11.374469 | orchestrator | 2026-04-16 06:52:05 | INFO  | Adding tag amphora
2026-04-16 06:52:11.374478 | orchestrator | 2026-04-16 06:52:06 | INFO  | Adding tag os:ubuntu
2026-04-16 06:52:11.374487 | orchestrator | 2026-04-16 06:52:06 | INFO  | Setting property architecture: x86_64
2026-04-16 06:52:11.374496 | orchestrator | 2026-04-16 06:52:06 | INFO  | Setting property hw_disk_bus: scsi
2026-04-16 06:52:11.374504 | orchestrator | 2026-04-16 06:52:06 | INFO  | Setting property hw_rng_model: virtio
2026-04-16 06:52:11.374513 | orchestrator | 2026-04-16 06:52:06 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-04-16 06:52:11.374522 | orchestrator | 2026-04-16 06:52:07 | INFO  | Setting property hw_watchdog_action: reset
2026-04-16 06:52:11.374531 | orchestrator | 2026-04-16 06:52:07 | INFO  | Setting property hypervisor_type: qemu
2026-04-16 06:52:11.374540 | orchestrator | 2026-04-16 06:52:07 | INFO  | Setting property os_distro: ubuntu
2026-04-16 06:52:11.374548 | orchestrator | 2026-04-16 06:52:07 | INFO  | Setting property replace_frequency: quarterly
2026-04-16 06:52:11.374557 | orchestrator | 2026-04-16 06:52:08 | INFO  | Setting property uuid_validity: last-1
2026-04-16 06:52:11.374565 | orchestrator | 2026-04-16 06:52:08 | INFO  | Setting property provided_until: none
2026-04-16 06:52:11.374574 | orchestrator | 2026-04-16 06:52:08 | INFO  | Setting property os_purpose: network
2026-04-16 06:52:11.374582 | orchestrator | 2026-04-16 06:52:08 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-04-16 06:52:11.374604 | orchestrator | 2026-04-16 06:52:09 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-04-16 06:52:11.374613 | orchestrator | 2026-04-16 06:52:09 | INFO  | Setting property internal_version: 2026-04-16
2026-04-16 06:52:11.374622 | orchestrator | 2026-04-16 06:52:09 | INFO  | Setting property image_original_user: ubuntu
2026-04-16 06:52:11.374630 | orchestrator | 2026-04-16 06:52:10 | INFO  | Setting property os_version: 2026-04-16
2026-04-16 06:52:11.374639 | orchestrator | 2026-04-16 06:52:10 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260416.qcow2
2026-04-16 06:52:11.374648 | orchestrator | 2026-04-16 06:52:10 | INFO  | Setting property image_build_date: 2026-04-16
2026-04-16 06:52:11.374657 | orchestrator | 2026-04-16 06:52:10 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-16 06:52:11.374667 | orchestrator | 2026-04-16 06:52:10 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-04-16'
2026-04-16 06:52:11.374677 | orchestrator | 2026-04-16 06:52:11 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-04-16 06:52:11.374703 | orchestrator | 2026-04-16 06:52:11 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-04-16 06:52:11.374715 | orchestrator | 2026-04-16 06:52:11 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-04-16 06:52:11.374725 | orchestrator | 2026-04-16 06:52:11 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-04-16 06:52:11.847036 | orchestrator | ok: Runtime: 0:02:56.236882
2026-04-16 06:52:11.867040 |
2026-04-16 06:52:11.867198 | TASK [Run checks]
2026-04-16 06:52:12.608434 | orchestrator | + set -e
2026-04-16 06:52:12.608642 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 06:52:12.608667 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 06:52:12.608689 | orchestrator | ++ INTERACTIVE=false
2026-04-16 06:52:12.608703 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 06:52:12.608715 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 06:52:12.608729 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 06:52:12.609525 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 06:52:12.616309 | orchestrator |
2026-04-16 06:52:12.616359 | orchestrator | # CHECK
2026-04-16 06:52:12.616371 | orchestrator |
2026-04-16 06:52:12.616383 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 06:52:12.616399 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 06:52:12.616411 | orchestrator | + echo
2026-04-16 06:52:12.616422 | orchestrator | + echo '# CHECK'
2026-04-16 06:52:12.616433 | orchestrator | + echo
2026-04-16 06:52:12.616448 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-16 06:52:12.617353 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-16 06:52:12.682666 | orchestrator |
2026-04-16 06:52:12.682800 | orchestrator | ## Containers @ testbed-manager
2026-04-16 06:52:12.682828 | orchestrator |
2026-04-16 06:52:12.682865 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-16 06:52:12.682886 | orchestrator | + echo
2026-04-16 06:52:12.682905 | orchestrator | + echo '## Containers @ testbed-manager'
2026-04-16 06:52:12.682925 | orchestrator | + echo
2026-04-16 06:52:12.682946 | orchestrator | + osism container testbed-manager ps
2026-04-16 06:52:14.641671 | orchestrator | 2026-04-16 06:52:14 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-04-16 06:52:15.010584 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-16 06:52:15.010695 | orchestrator | 31c3a4bfebed registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-04-16 06:52:15.010713 | orchestrator | 27703cb3eebf registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-04-16 06:52:15.010723 | orchestrator | 25fa9c9f9527 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-16 06:52:15.010731 | orchestrator | 4268f5020095 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-16 06:52:15.010740 | orchestrator | 53174e8ba7a9 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-04-16 06:52:15.010751 | orchestrator | b94deb5e8679 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 56 minutes ago Up 55 minutes cephclient
2026-04-16 06:52:15.010761 | orchestrator | 00300d65d8a1 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-16 06:52:15.010768 | orchestrator | 2ac35a9c8b63 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-16 06:52:15.010798 | orchestrator | c60b052af8b6 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-16 06:52:15.010806 | orchestrator | 9bf1f067a483 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-04-16 06:52:15.010814 | orchestrator | 8c226b4691a8 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-04-16 06:52:15.010822 | orchestrator | 6b19df446ac2 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-04-16 06:52:15.010830 | orchestrator | dab2e9c0b477 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-04-16 06:52:15.010837 | orchestrator | d9b6a90be2b5 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-04-16 06:52:15.010861 | orchestrator | 6796778a4aab registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-04-16 06:52:15.010876 | orchestrator | 7c4ca4ea5d66 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-04-16 06:52:15.010885 | orchestrator | 56629509135a registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-04-16 06:52:15.010892 | orchestrator | 03ff68d9853c registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-04-16 06:52:15.010898 | orchestrator | b8c4d3610f26 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-04-16 06:52:15.010905 | orchestrator | 3cdfa73f33b7 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-04-16 06:52:15.010912 | orchestrator | 9d8210e1741b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-04-16 06:52:15.010919 | orchestrator | b51af4ab9f9d registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-04-16 06:52:15.010932 | orchestrator | dbbe2ee1ab7e registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-04-16 06:52:15.010939 | orchestrator | 16fed27a60ab registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-04-16 06:52:15.010946 | orchestrator | 5ae40557b593 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-04-16 06:52:15.010954 | orchestrator | 881b97a8c49f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-04-16 06:52:15.010960 | orchestrator | 9a1c58d6f984 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-04-16 06:52:15.010967 | orchestrator | 44f3ed200034 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-04-16 06:52:15.010974 | orchestrator | d240ff99de54 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-04-16 06:52:15.010985 | orchestrator | 76ebafedf5b8 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-04-16 06:52:15.347283 | orchestrator |
2026-04-16 06:52:15.347392 | orchestrator | ## Images @ testbed-manager
2026-04-16 06:52:15.347409 | orchestrator |
2026-04-16 06:52:15.347422 | orchestrator | + echo
2026-04-16 06:52:15.347434 | orchestrator | + echo '## Images @ testbed-manager'
2026-04-16 06:52:15.347445 | orchestrator | + echo
2026-04-16 06:52:15.347461 | orchestrator | + osism container testbed-manager images
2026-04-16 06:52:17.810081 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-16 06:52:17.810246 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9e238fdcbaa6 3 hours ago 238MB
2026-04-16 06:52:17.810263 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB
2026-04-16 06:52:17.810275 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB
2026-04-16 06:52:17.810287 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB
2026-04-16 06:52:17.810298 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-16 06:52:17.810309 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-16 06:52:17.810320 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-16 06:52:17.810336 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB
2026-04-16 06:52:17.810347 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-16 06:52:17.810390 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB
2026-04-16 06:52:17.810402 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB
2026-04-16 06:52:17.810413 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-16 06:52:17.810424 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB
2026-04-16 06:52:17.810435 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB
2026-04-16 06:52:17.810446 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB
2026-04-16 06:52:17.810457 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB
2026-04-16 06:52:17.810468 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB
2026-04-16 06:52:17.810479 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 4 months ago 238MB
2026-04-16 06:52:17.810491 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB
2026-04-16 06:52:17.810502 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB
2026-04-16 06:52:17.810513 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB
2026-04-16 06:52:17.810524 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB
2026-04-16 06:52:17.810535 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB
2026-04-16 06:52:17.810546 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB
2026-04-16 06:52:17.810557 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-04-16 06:52:18.121712 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-16 06:52:18.121850 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-16 06:52:18.158562 | orchestrator |
2026-04-16 06:52:18.158668 | orchestrator | ## Containers @ testbed-node-0
2026-04-16 06:52:18.158682 | orchestrator |
2026-04-16 06:52:18.158694 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-16 06:52:18.158705 | orchestrator | + echo
2026-04-16 06:52:18.158716 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-04-16 06:52:18.158728 | orchestrator | + echo
2026-04-16 06:52:18.158739 | orchestrator | + osism container testbed-node-0 ps
2026-04-16 06:52:20.503641 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-16 06:52:20.503767 | orchestrator | eac22b478686 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-16 06:52:20.503853 | orchestrator | 8850c86429e6 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-16 06:52:20.503877 | orchestrator | 0cf6f4dc4f16 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-04-16 06:52:20.503895 | orchestrator | 381a95b78ebd registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-16 06:52:20.503943 | orchestrator | e83f33b8fb42 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-16 06:52:20.503960 | orchestrator | ec7df623f90b registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-04-16 06:52:20.503985 | orchestrator | 8f9bbce7e991 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-16 06:52:20.504002 | orchestrator | 620b54fa5d87 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-16 06:52:20.504018 | orchestrator | 3ed85dbfd497 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-04-16 06:52:20.504036 | orchestrator | 8f7964da8fe2 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-16 06:52:20.504052 | orchestrator | eecceb74ed28 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-16 06:52:20.504069 | orchestrator | 839e12e3bc26 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-16 06:52:20.504085 | orchestrator | cd5c450ce4c6 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-04-16 06:52:20.504102 | orchestrator | 2fafa7ab4d02 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-04-16 06:52:20.504118 | orchestrator | 3de3d4cd03c4 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-04-16 06:52:20.504163 | orchestrator | f32883bf3f01 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api
2026-04-16 06:52:20.504182 | orchestrator | f3045ce90e31 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-04-16 06:52:20.504198 | orchestrator | 57e84e851fa0 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-04-16 06:52:20.504214 | orchestrator | ce9ebc1da296 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker
2026-04-16 06:52:20.504261 | orchestrator | 724e35aa9602 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping
2026-04-16 06:52:20.504279 | orchestrator | 010eebcc774c registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager
2026-04-16 06:52:20.504295 | orchestrator | 4721988c0d7b registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes octavia_driver_agent
2026-04-16 06:52:20.504321 | orchestrator | ad3461c493c8 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-04-16 06:52:20.504337 | orchestrator | c2827d6ef155 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-04-16 06:52:20.504354 | orchestrator | 41553c2228f9 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-16 06:52:20.504375 | orchestrator | b4b92f43bc02 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer
2026-04-16 06:52:20.504391 | orchestrator | bedf9b79ddc6 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central
2026-04-16 06:52:20.504407 | orchestrator | c052c2f0b039 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api
2026-04-16 06:52:20.504424 | orchestrator | 53c07f8417b0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-04-16 06:52:20.504440 | orchestrator | 0f902458105a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-04-16 06:52:20.504455 | orchestrator | ef9d0dbf19b6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener
2026-04-16 06:52:20.504473 | orchestrator | 297735939a39 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api
2026-04-16 06:52:20.504488 | orchestrator | 513ae9f11eb3 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_backup
2026-04-16 06:52:20.504505 | orchestrator | 46e86b8f19ef registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume
2026-04-16 06:52:20.504522 | orchestrator | c64e79b4ef34 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-04-16 06:52:20.504538 | orchestrator | 67fe09708f92 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api
2026-04-16 06:52:20.504554 | orchestrator | e6b07272729b registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) glance_api
2026-04-16 06:52:20.504570 | orchestrator | b4eb3a836194 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console
2026-04-16 06:52:20.504586 | orchestrator | f02430e8bcf3 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-04-16 06:52:20.504614 | orchestrator | a4e1a61749fd registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) horizon
2026-04-16 06:52:20.504639 | orchestrator | c5decebdb4f0 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_novncproxy
2026-04-16 06:52:20.504656 | orchestrator | 2cfd1075fac0 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_conductor
2026-04-16 06:52:20.504679 | orchestrator | b8fd8aeaeec5 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_api
2026-04-16 06:52:20.504696 | orchestrator | 9c9a3cc10e46 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler
2026-04-16 06:52:20.504712 | orchestrator | f51a1fd03f9a registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server
2026-04-16 06:52:20.504729 | orchestrator | a5b5e55f1018 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) placement_api
2026-04-16 06:52:20.504745 | orchestrator | 6921e1a12dda registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone
2026-04-16 06:52:20.504761 | orchestrator | 3e6d44b43490 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet
2026-04-16 06:52:20.504778 | orchestrator | 2d2280823820 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) keystone_ssh
2026-04-16 06:52:20.504793 | orchestrator | b218d4171da3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-0
2026-04-16 06:52:20.504810 | orchestrator | 4d204f7f887d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-16 06:52:20.504826 | orchestrator | 7ecc09e53bd0 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-04-16 06:52:20.504843 | orchestrator | fcab0759c756 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-16 06:52:20.504858 | orchestrator | 0f025e83815e registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-16 06:52:20.504874 | orchestrator | bd57e0d0d020 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-16 06:52:20.504891 | orchestrator | 069be63d9303 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-16 06:52:20.504913 | orchestrator | 2371beabbea2 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-16 06:52:20.504930 | orchestrator | 0b4767d77fb9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-16 06:52:20.504955 | orchestrator | d9831e8ba79c registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-16 06:52:20.504979 | orchestrator | 200d87b80997 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-16 06:52:20.504997 | orchestrator | 647eec75c9c1 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-16 06:52:20.505012 | orchestrator | 5e51e7c7ae04 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-16 06:52:20.505027 | orchestrator | 7e8f2559c666 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-04-16 06:52:20.505042 | orchestrator | 383d31d75168 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-04-16 06:52:20.505059 | orchestrator | 5ebfea38d919 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-04-16 06:52:20.505075 | orchestrator | b97ed5f4978c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-04-16 06:52:20.505091 | orchestrator | a96c62025f1c registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-04-16 06:52:20.505107 | orchestrator | 1fb07f0a6637 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-04-16 06:52:20.505123 | orchestrator | b8bcbfbda26f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-16 06:52:20.505177 | orchestrator | 999414600d27 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-16 06:52:20.505194 | orchestrator | 9a1543361a2d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-16 06:52:20.788380 | orchestrator | 2026-04-16 06:52:20.788500 | orchestrator | ## Images @ testbed-node-0 2026-04-16 06:52:20.788518 | orchestrator | 2026-04-16 06:52:20.788532 | orchestrator | + echo 2026-04-16 06:52:20.788545 | orchestrator | + echo '## Images @ testbed-node-0' 2026-04-16 06:52:20.788558 | orchestrator | + echo 2026-04-16 06:52:20.788569 | orchestrator | + osism container testbed-node-0 images 2026-04-16 06:52:23.252923 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-16 06:52:23.253053 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-16 06:52:23.253070 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-16 06:52:23.253082 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 2026-04-16 06:52:23.253100 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-16 06:52:23.253129 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-16 06:52:23.253198 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-16 06:52:23.253210 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-16 06:52:23.253220 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-16 06:52:23.253231 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-16 06:52:23.253242 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-16 06:52:23.253252 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-16 06:52:23.253263 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-16 06:52:23.253274 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-16 06:52:23.253284 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-16 06:52:23.253295 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-16 06:52:23.253305 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-16 06:52:23.253316 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-16 06:52:23.253327 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-16 06:52:23.253337 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-16 06:52:23.253348 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-16 06:52:23.253359 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-16 06:52:23.253369 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-16 06:52:23.253379 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-16 
06:52:23.253390 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-16 06:52:23.253401 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-16 06:52:23.253411 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-16 06:52:23.253422 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-16 06:52:23.253439 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-16 06:52:23.253450 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-16 06:52:23.253463 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-16 06:52:23.253484 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-16 06:52:23.253515 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-16 06:52:23.253528 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-16 06:52:23.253540 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-16 06:52:23.253553 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-16 06:52:23.253565 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-16 06:52:23.253578 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-16 06:52:23.253590 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-16 06:52:23.253602 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-16 06:52:23.253614 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-16 06:52:23.253626 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-16 06:52:23.253638 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-16 06:52:23.253651 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-16 06:52:23.253663 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-16 06:52:23.253675 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-16 06:52:23.253687 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB 2026-04-16 06:52:23.253699 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB 2026-04-16 06:52:23.253712 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB 2026-04-16 06:52:23.253724 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB 2026-04-16 06:52:23.253738 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB 2026-04-16 06:52:23.253750 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB 2026-04-16 06:52:23.253762 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB 2026-04-16 06:52:23.253775 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB 2026-04-16 06:52:23.253787 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB 2026-04-16 06:52:23.253799 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB 2026-04-16 06:52:23.253810 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB 2026-04-16 06:52:23.253830 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB 2026-04-16 06:52:23.253842 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB 2026-04-16 06:52:23.253857 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB 2026-04-16 06:52:23.253869 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB 2026-04-16 06:52:23.253880 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB 2026-04-16 06:52:23.253890 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB 2026-04-16 06:52:23.253901 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB 2026-04-16 06:52:23.253918 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB 2026-04-16 06:52:23.253929 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB 2026-04-16 06:52:23.253940 | 
orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB 2026-04-16 06:52:23.253950 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB 2026-04-16 06:52:23.253961 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB 2026-04-16 06:52:23.253972 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB 2026-04-16 06:52:23.529433 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-16 06:52:23.529916 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-16 06:52:23.586436 | orchestrator | 2026-04-16 06:52:23.586529 | orchestrator | ## Containers @ testbed-node-1 2026-04-16 06:52:23.586548 | orchestrator | 2026-04-16 06:52:23.586559 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-16 06:52:23.586571 | orchestrator | + echo 2026-04-16 06:52:23.586582 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-04-16 06:52:23.586594 | orchestrator | + echo 2026-04-16 06:52:23.586605 | orchestrator | + osism container testbed-node-1 ps 2026-04-16 06:52:25.925518 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-16 06:52:25.925651 | orchestrator | a3a3d4b442af registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-04-16 06:52:25.925669 | orchestrator | 733aa0add466 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-04-16 06:52:25.925734 | orchestrator | 0e9d08983059 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-04-16 06:52:25.925749 | orchestrator | 7575c0459fcf registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-04-16 06:52:25.925764 | orchestrator | b61da1ad316b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-04-16 06:52:25.925775 | orchestrator | 99507deb4c02 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-04-16 06:52:25.925817 | orchestrator | 0badc5de6f30 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-04-16 06:52:25.925829 | orchestrator | 745992195e5f registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-04-16 06:52:25.925840 | orchestrator | 0647207b35d4 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share 2026-04-16 06:52:25.925851 | orchestrator | 0873c75b9bca registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-04-16 06:52:25.925862 | orchestrator | e133877b3fa4 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-04-16 06:52:25.925873 | orchestrator | f9bae017c9ae registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-04-16 06:52:25.925905 | orchestrator | 87e706483bb7 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-04-16 06:52:25.925916 | orchestrator | 5863d83b4c00 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-04-16 06:52:25.925927 | orchestrator | c550d46cf84d registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator 2026-04-16 06:52:25.925938 | orchestrator | 5d2f83607cea registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api 2026-04-16 06:52:25.925949 | orchestrator | 9a53807d6196 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-04-16 06:52:25.925960 | orchestrator | 29493b5cd3bb registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-04-16 06:52:25.926234 | orchestrator | c7e4879f84c1 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker 2026-04-16 06:52:25.926256 | orchestrator | 5a3bcffae060 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping 2026-04-16 06:52:25.926270 | orchestrator | 87e11b9e70df registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager 2026-04-16 06:52:25.926282 | orchestrator | 303ab51304a9 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes octavia_driver_agent 2026-04-16 06:52:25.926294 | orchestrator | c22e83cadc38 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-04-16 06:52:25.926318 | 
orchestrator | c04fd5da9d2d registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker 2026-04-16 06:52:25.926330 | orchestrator | a6257915700a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns 2026-04-16 06:52:25.926343 | orchestrator | a146940fcebb registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer 2026-04-16 06:52:25.926355 | orchestrator | 39d25ca7f6e6 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central 2026-04-16 06:52:25.926368 | orchestrator | cb60b4cfd19a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api 2026-04-16 06:52:25.926381 | orchestrator | 40e2de974f1c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-04-16 06:52:25.926393 | orchestrator | 13acfe6edd26 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker 2026-04-16 06:52:25.926406 | orchestrator | 2db992cb7544 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener 2026-04-16 06:52:25.926419 | orchestrator | 3301f35f21a9 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) barbican_api 2026-04-16 06:52:25.926431 | orchestrator | 2316cded8c81 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes 
(healthy) cinder_backup 2026-04-16 06:52:25.926444 | orchestrator | a06697e661a2 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume 2026-04-16 06:52:25.926455 | orchestrator | a808d8ea2ec1 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler 2026-04-16 06:52:25.926465 | orchestrator | 63be5e3c8792 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api 2026-04-16 06:52:25.926476 | orchestrator | 8236f3332c7a registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) glance_api 2026-04-16 06:52:25.926494 | orchestrator | d16b48307cc4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console 2026-04-16 06:52:25.926518 | orchestrator | d71e5eda265d registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver 2026-04-16 06:52:25.926530 | orchestrator | f0e633bc9139 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) horizon 2026-04-16 06:52:25.926541 | orchestrator | 804a5cf52555 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_novncproxy 2026-04-16 06:52:25.926559 | orchestrator | f47d17eda39c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_conductor 2026-04-16 06:52:25.926570 | orchestrator | bf85474e33fd registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_api 2026-04-16 
06:52:25.926581 | orchestrator | bb824b4ecbe3 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler 2026-04-16 06:52:25.926592 | orchestrator | f061bdeeafad registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 46 minutes (healthy) neutron_server 2026-04-16 06:52:25.926602 | orchestrator | 1d8c7f38e11b registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) placement_api 2026-04-16 06:52:25.926613 | orchestrator | 364cbe243ad3 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone 2026-04-16 06:52:25.926624 | orchestrator | 3d69b0a93e92 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet 2026-04-16 06:52:25.926635 | orchestrator | 059445f1f6e9 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_ssh 2026-04-16 06:52:25.926645 | orchestrator | d738d12a83c1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-1 2026-04-16 06:52:25.926657 | orchestrator | adfcc35c5c10 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-16 06:52:25.926668 | orchestrator | deb83ba22d33 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-04-16 06:52:25.926679 | orchestrator | e6753c55e3ac registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-04-16 06:52:25.926690 | orchestrator | 69dc12e38de5 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-04-16 06:52:25.926700 | orchestrator | dab326cefe41 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-04-16 06:52:25.926711 | orchestrator | f47bb6e4c4fd registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-04-16 06:52:25.926722 | orchestrator | 5e44ea534807 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-04-16 06:52:25.926733 | orchestrator | 0a4d656acc5d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-04-16 06:52:25.926744 | orchestrator | 7c7cc4edaec9 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-04-16 06:52:25.926770 | orchestrator | 400bc05fad29 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-04-16 06:52:25.926782 | orchestrator | 4a7d6d37eed7 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-04-16 06:52:25.926792 | orchestrator | dc271ef63672 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-04-16 06:52:25.926803 | orchestrator | ea39f54aa0cf registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-04-16 06:52:25.926814 | orchestrator | 2e6b32b5e8f8 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-04-16 06:52:25.926825 | orchestrator | 6fdb465f4f84 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-04-16 06:52:25.926841 | orchestrator | 7d5a6979396b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-04-16 06:52:25.926852 | orchestrator | f0c69c05dcd7 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-04-16 06:52:25.926863 | orchestrator | 543d94e30fbe registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-04-16 06:52:25.926873 | orchestrator | 541823861a19 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-04-16 06:52:25.926889 | orchestrator | 01b7282780b8 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-04-16 06:52:25.926900 | orchestrator | 91e12a7edbb7 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-04-16 06:52:26.203873 | orchestrator | 2026-04-16 06:52:26.203999 | orchestrator | ## Images @ testbed-node-1 2026-04-16 06:52:26.204024 | orchestrator | 2026-04-16 06:52:26.204046 | orchestrator | + echo 2026-04-16 06:52:26.204068 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-16 06:52:26.204089 | orchestrator | + echo 2026-04-16 06:52:26.204108 | orchestrator | + osism container testbed-node-1 images 2026-04-16 06:52:28.498480 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-16 06:52:28.498599 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-16 06:52:28.498616 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-16 06:52:28.498629 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-16 06:52:28.498642 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-16 06:52:28.498653 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-16 06:52:28.499591 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-16 06:52:28.499631 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-16 06:52:28.499651 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-16 06:52:28.499672 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-16 06:52:28.499691 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-16 06:52:28.499710 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-16 06:52:28.499730 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-16 06:52:28.499750 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-16 06:52:28.499770 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-16 06:52:28.499790 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-16 06:52:28.499809 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-16 06:52:28.499829 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-16 06:52:28.499849 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-16 06:52:28.499868 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-16 06:52:28.499888 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-16 06:52:28.499907 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-16 06:52:28.499926 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-16 06:52:28.499946 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-16 06:52:28.499965 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-16 06:52:28.499985 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-16 06:52:28.500004 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-16 06:52:28.500023 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-16 06:52:28.500043 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-16 06:52:28.500062 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-16 06:52:28.500082 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-16 06:52:28.500101 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-16 06:52:28.500197 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-16 06:52:28.500239 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-16 06:52:28.500259 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-16 06:52:28.500279 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-16 06:52:28.500298 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-16 06:52:28.500316 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-16 06:52:28.500334 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-16 06:52:28.500352 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-16 06:52:28.500393 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-16 06:52:28.500412 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-16 06:52:28.500430 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-16 06:52:28.500449 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-16 06:52:28.500466 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-16 06:52:28.500485 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-16 06:52:28.500504 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-16 06:52:28.500522 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-16 06:52:28.500540 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-16 06:52:28.500558 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-16 06:52:28.500576 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-16 06:52:28.500594 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-16 06:52:28.500612 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-16 06:52:28.500631 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-16 06:52:28.500650 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-16 06:52:28.500668 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-16 06:52:28.500683 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-16 06:52:28.500694 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-16 06:52:28.500705 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-16 06:52:28.500726 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-16 06:52:28.500744 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-16 06:52:28.500755 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-16 06:52:28.500765 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-16 06:52:28.500776 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-16 06:52:28.500797 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-16 06:52:28.500808 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-16 06:52:28.500819 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-16 06:52:28.500830 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-16 06:52:28.500841 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-16 06:52:28.500851 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-16 06:52:28.768747 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-16 06:52:28.769089 | orchestrator | ++ semver 9.5.0 5.0.0
2026-04-16 06:52:28.811301 | orchestrator |
2026-04-16 06:52:28.811395 | orchestrator | ## Containers @ testbed-node-2
2026-04-16 06:52:28.811410 | orchestrator |
2026-04-16 06:52:28.811422 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-16 06:52:28.811433 | orchestrator | + echo
2026-04-16 06:52:28.811444 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-16 06:52:28.811455 | orchestrator | + echo
2026-04-16 06:52:28.811466 | orchestrator | + osism container testbed-node-2 ps
2026-04-16 06:52:31.225623 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-16 06:52:31.225743 | orchestrator | e6dd9fedbedf registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-04-16 06:52:31.225761 | orchestrator | 9af6537664eb registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-04-16 06:52:31.225773 | orchestrator | d9964c0f12e6 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana
2026-04-16 06:52:31.225784 | orchestrator | 65574d9c8239 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-04-16 06:52:31.225797 | orchestrator | 639f1b11ca7c registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-04-16 06:52:31.225808 | orchestrator | a4e7859f03fa registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-04-16 06:52:31.225819 | orchestrator | ac998880c1bc registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-04-16 06:52:31.225856 | orchestrator | e6de28c43dd8 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-04-16 06:52:31.225868 | orchestrator | f5a0a4812351 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_share
2026-04-16 06:52:31.225937 | orchestrator | 380f0237de50 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-04-16 06:52:31.225950 | orchestrator | cd1cd4789adc registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-04-16 06:52:31.225961 | orchestrator | d6836008d2b9 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-04-16 06:52:31.225972 | orchestrator | 3e305d417f2f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-04-16 06:52:31.225983 | orchestrator | 91f5d9308cd8 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-04-16 06:52:31.226010 | orchestrator | 55e450df66ed registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-04-16 06:52:31.226078 | orchestrator | 7204cc760486 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_api
2026-04-16 06:52:31.226090 | orchestrator | 4ec746f5bcfb registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-04-16 06:52:31.226101 | orchestrator | 335e1435e8e2 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-04-16 06:52:31.226112 | orchestrator | b1db9ee3744e registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_worker
2026-04-16 06:52:31.226234 | orchestrator | f6ea7658c588 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_housekeeping
2026-04-16 06:52:31.226251 | orchestrator | f2fc2e6df68b registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) octavia_health_manager
2026-04-16 06:52:31.226264 | orchestrator | b369e8b7edb9 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-04-16 06:52:31.226276 | orchestrator | 720e7a10c72d registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-04-16 06:52:31.226288 | orchestrator | d6ee4818c500 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_worker
2026-04-16 06:52:31.226300 | orchestrator | 4769c5e75c64 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_mdns
2026-04-16 06:52:31.226324 | orchestrator | 83fbbeba8737 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_producer
2026-04-16 06:52:31.226336 | orchestrator | d5f5165a8a75 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central
2026-04-16 06:52:31.226348 | orchestrator | 2353fb0a5e1e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 25 minutes (healthy) designate_api
2026-04-16 06:52:31.226360 | orchestrator | 5a7574f74886 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-04-16 06:52:31.226372 | orchestrator | 3f356464fb50 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-04-16 06:52:31.226384 | orchestrator | 21177b7322e0 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-04-16 06:52:31.226396 | orchestrator | 37780c0d4bf1 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-04-16 06:52:31.226416 | orchestrator | 814ca26ba5d4 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_backup
2026-04-16 06:52:31.226430 | orchestrator | fc3d6eb94a2b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) cinder_volume
2026-04-16 06:52:31.226442 | orchestrator | e5a68524ede0 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_scheduler
2026-04-16 06:52:31.226454 | orchestrator | 042ae3a276ff registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_api
2026-04-16 06:52:31.226466 | orchestrator | d05e8086253f registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) glance_api
2026-04-16 06:52:31.226478 | orchestrator | 1e08c9e3cdf1 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) skyline_console
2026-04-16 06:52:31.226491 | orchestrator | 3327bb37af8a registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 35 minutes ago Up 35 minutes (healthy) skyline_apiserver
2026-04-16 06:52:31.226512 | orchestrator | 80c2acee9f7f registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) horizon
2026-04-16 06:52:31.226524 | orchestrator | 5abab4066f95 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_novncproxy
2026-04-16 06:52:31.226535 | orchestrator | eb51de762623 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 40 minutes ago Up 40 minutes (healthy) nova_conductor
2026-04-16 06:52:31.226552 | orchestrator | 44a375e32502 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 41 minutes (healthy) nova_api
2026-04-16 06:52:31.226563 | orchestrator | 24b74d0fe988 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_scheduler
2026-04-16 06:52:31.226574 | orchestrator | 1b05161fd378 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 47 minutes ago Up 47 minutes (healthy) neutron_server
2026-04-16 06:52:31.226585 | orchestrator | c70ef5d6434c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) placement_api
2026-04-16 06:52:31.226596 | orchestrator | 9d74243841c6 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone
2026-04-16 06:52:31.226607 | orchestrator | 5e47521f3110 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_fernet
2026-04-16 06:52:31.226617 | orchestrator | 3e47b65cd559 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) keystone_ssh
2026-04-16 06:52:31.226628 | orchestrator | 88c9df8578fb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 54 minutes ago Up 54 minutes ceph-mgr-testbed-node-2
2026-04-16 06:52:31.226639 | orchestrator | b1a17eff9161 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-16 06:52:31.226650 | orchestrator | 8eb997055eb5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-04-16 06:52:31.226661 | orchestrator | 6203f9cb5dd1 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-04-16 06:52:31.226672 | orchestrator | 9c74eccc2e67 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-04-16 06:52:31.226688 | orchestrator | 07415bdfef50 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-04-16 06:52:31.226699 | orchestrator | f8bc3b34a1e7 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-04-16 06:52:31.226710 | orchestrator | 81f9e1a5e9d1 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-04-16 06:52:31.226721 | orchestrator | 45b63abf294b registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-04-16 06:52:31.226731 | orchestrator | 38da310ab170 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-04-16 06:52:31.226749 | orchestrator | 7daeb1601f71 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-04-16 06:52:31.226766 | orchestrator | f190bee8cc44 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-04-16 06:52:31.226777 | orchestrator | 2e6b565d1e5d registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-04-16 06:52:31.226788 | orchestrator | 2108c37b3258 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-04-16 06:52:31.226799 | orchestrator | 75340418a21a registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-04-16 06:52:31.226810 | orchestrator | d0a97c954298 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-04-16 06:52:31.226820 | orchestrator | 897fb14970b2 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-04-16 06:52:31.226831 | orchestrator | a6075612c3c3 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-04-16 06:52:31.226842 | orchestrator | 788528eeffe2 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-04-16 06:52:31.226853 | orchestrator | 88266b9a6675 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-04-16 06:52:31.226864 | orchestrator | 62ca5e8e8675 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-04-16 06:52:31.226875 | orchestrator | 2a244c4f14b5 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-04-16 06:52:31.522968 | orchestrator |
2026-04-16 06:52:31.523066 | orchestrator | ## Images @ testbed-node-2
2026-04-16 06:52:31.523081 | orchestrator |
2026-04-16 06:52:31.523091 | orchestrator | + echo
2026-04-16 06:52:31.523102 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-16 06:52:31.523113 | orchestrator | + echo
2026-04-16 06:52:31.523122 | orchestrator | + osism container testbed-node-2 images
2026-04-16 06:52:33.926509 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-16 06:52:33.926595 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-16 06:52:33.926631 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-16 06:52:33.926638 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-16 06:52:33.926645 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-16 06:52:33.926651 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-16 06:52:33.926658 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-16 06:52:33.926664 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-16 06:52:33.926688 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-16 06:52:33.926696 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-16 06:52:33.926703 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-16 06:52:33.926714 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-16 06:52:33.926721 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-16 06:52:33.926729 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-16 06:52:33.926736 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-16 06:52:33.926743 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-16 06:52:33.926750 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-16 06:52:33.926757 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-16 06:52:33.926765 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-16 06:52:33.926772 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-16 06:52:33.926779 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-16 06:52:33.926786 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-16 06:52:33.926793 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-16 06:52:33.926801 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-16 06:52:33.926808 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-16 06:52:33.926815 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-16 06:52:33.926822 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-16 06:52:33.926829 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-16 06:52:33.926836 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-16 06:52:33.926843 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-16 06:52:33.926850 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-16 06:52:33.926858 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-16 06:52:33.926879 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-16 06:52:33.926886 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-16 06:52:33.926894 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-16 06:52:33.926907 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-16 06:52:33.926914 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-16 06:52:33.926921 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-16 06:52:33.926928 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-16 06:52:33.926935 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-16 06:52:33.926943 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-16 06:52:33.926950 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-16 06:52:33.926957 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-16 06:52:33.926964 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-16 06:52:33.926979 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-16 06:52:33.926987 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-16 06:52:33.926994 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-16 06:52:33.927001 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-16 06:52:33.927008 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-16 06:52:33.927015 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-16 06:52:33.927023 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-16 06:52:33.927030 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-16 06:52:33.927037 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-16 06:52:33.927044 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-16 06:52:33.927051 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-16 06:52:33.927058 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-16 06:52:33.927065 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-16 06:52:33.927072 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-16 06:52:33.927079 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-16 06:52:33.927087 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-16 06:52:33.927095 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-16 06:52:33.927108 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-16 06:52:33.927116 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-16 06:52:33.927124 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-16 06:52:33.927137 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-16 06:52:33.927179 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-16 06:52:33.927187 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-16 06:52:33.927200 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-16 06:52:33.927208 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-16 06:52:33.927217 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-16 06:52:34.215702 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-16 06:52:34.221624 | orchestrator | + set -e
2026-04-16 06:52:34.221701 | orchestrator | + source /opt/manager-vars.sh
2026-04-16 06:52:34.221714 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-16 06:52:34.221723 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-16 06:52:34.221732 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-16 06:52:34.221740 | orchestrator | ++ CEPH_VERSION=reef
2026-04-16 06:52:34.221750 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-16 06:52:34.221759 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-16 06:52:34.221768 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 06:52:34.221777 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 06:52:34.221786 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-16 06:52:34.221794 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-16 06:52:34.221803 | orchestrator | ++ export ARA=false
2026-04-16 06:52:34.221812 | orchestrator | ++ ARA=false
2026-04-16 06:52:34.221820 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-16 06:52:34.221829 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-16 06:52:34.221837 | orchestrator | ++ export TEMPEST=false
2026-04-16 06:52:34.221846 | orchestrator | ++ TEMPEST=false
2026-04-16 06:52:34.221855 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 06:52:34.221863 | orchestrator | ++ IS_ZUUL=true
2026-04-16 06:52:34.221872 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 06:52:34.221881 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 06:52:34.221889 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 06:52:34.221898 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 06:52:34.221906 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 06:52:34.221915 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 06:52:34.221924 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 06:52:34.221933 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 06:52:34.221942 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 06:52:34.221950 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 06:52:34.221959 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-16 06:52:34.221968 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-04-16 06:52:34.232430 | orchestrator | + set -e
2026-04-16 06:52:34.233381 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 06:52:34.233439 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 06:52:34.233461 | orchestrator | ++ INTERACTIVE=false
2026-04-16 06:52:34.233477 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 06:52:34.233495 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 06:52:34.233513 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 06:52:34.234264 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 06:52:34.240474 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 06:52:34.240553 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 06:52:34.240593 | orchestrator | + echo
2026-04-16 06:52:34.240605 | orchestrator |
2026-04-16 06:52:34.240618 | orchestrator | + echo '# Ceph status'
2026-04-16 06:52:34.240629 | orchestrator | # Ceph status
2026-04-16 06:52:34.240640 | orchestrator |
2026-04-16 06:52:34.240651 | orchestrator | + echo
2026-04-16 06:52:34.240854 | orchestrator | + ceph -s
2026-04-16 06:52:34.830306 | orchestrator | cluster:
2026-04-16 06:52:34.830412 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-04-16 06:52:34.830429 | orchestrator | health: HEALTH_OK
2026-04-16 06:52:34.830441 | orchestrator |
2026-04-16 06:52:34.830453 | orchestrator | services:
2026-04-16 06:52:34.830464 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 66m)
2026-04-16 06:52:34.830477 | orchestrator | mgr: testbed-node-1(active, since 54m), standbys: testbed-node-0, testbed-node-2
2026-04-16 06:52:34.830489 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-04-16 06:52:34.830500 | orchestrator | osd: 6 osds: 6 up (since 62m), 6 in (since 63m)
2026-04-16 06:52:34.830511 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-04-16 06:52:34.830522 | orchestrator |
2026-04-16 06:52:34.830533 | orchestrator | data:
2026-04-16 06:52:34.830543 | orchestrator | volumes: 1/1 healthy
2026-04-16 06:52:34.830554 | orchestrator | pools: 14 pools, 401 pgs
2026-04-16 06:52:34.830565 | orchestrator | objects: 556 objects, 2.2 GiB
2026-04-16 06:52:34.830576 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-04-16 06:52:34.830587 | orchestrator | pgs: 401 active+clean
2026-04-16 06:52:34.830598 | orchestrator |
2026-04-16 06:52:34.875267 | orchestrator |
2026-04-16 06:52:34.875362 | orchestrator | # Ceph versions
2026-04-16 06:52:34.875377 | orchestrator |
2026-04-16 06:52:34.875389 | orchestrator | + echo
2026-04-16 06:52:34.875400 | orchestrator | + echo '# Ceph versions'
2026-04-16 06:52:34.875412 | orchestrator | + echo
2026-04-16 06:52:34.875423 | orchestrator | + ceph versions
2026-04-16 06:52:35.463264 | orchestrator | {
2026-04-16 06:52:35.463355 | orchestrator | "mon": {
2026-04-16 06:52:35.463368 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-16 06:52:35.463378 | orchestrator | },
2026-04-16 06:52:35.463388 | orchestrator | "mgr": {
2026-04-16 06:52:35.463396 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-16 06:52:35.463405 | orchestrator | },
2026-04-16 06:52:35.463413 | orchestrator | "osd": {
2026-04-16 06:52:35.463422 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-04-16 06:52:35.463431 | orchestrator | },
2026-04-16 06:52:35.463439 | orchestrator | "mds": {
2026-04-16 06:52:35.463448 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-16 06:52:35.463456 | orchestrator | },
2026-04-16 06:52:35.463465 | orchestrator | "rgw": {
2026-04-16 06:52:35.463473 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-04-16 06:52:35.463482 | orchestrator | },
2026-04-16 06:52:35.463490 | orchestrator | "overall": {
2026-04-16 06:52:35.463499 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-04-16 06:52:35.463508 | orchestrator | }
2026-04-16 06:52:35.463516 | orchestrator | }
2026-04-16 06:52:35.508712 | orchestrator |
2026-04-16 06:52:35.508806 | orchestrator | # Ceph OSD tree
2026-04-16 06:52:35.508820 | orchestrator |
2026-04-16 06:52:35.508832 | orchestrator | + echo
2026-04-16 06:52:35.508842 | orchestrator | + echo '# Ceph OSD tree'
2026-04-16
06:52:35.508854 | orchestrator | + echo 2026-04-16 06:52:35.508865 | orchestrator | + ceph osd df tree 2026-04-16 06:52:36.005336 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-16 06:52:36.005439 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 446 MiB 113 GiB 5.93 1.00 - root default 2026-04-16 06:52:36.005453 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 155 MiB 38 GiB 5.95 1.00 - host testbed-node-3 2026-04-16 06:52:36.005466 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 78 MiB 19 GiB 5.48 0.92 174 up osd.0 2026-04-16 06:52:36.005477 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 78 MiB 19 GiB 6.42 1.08 218 up osd.3 2026-04-16 06:52:36.005488 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 135 MiB 38 GiB 5.90 0.99 - host testbed-node-4 2026-04-16 06:52:36.005528 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 7.10 1.20 195 up osd.2 2026-04-16 06:52:36.005539 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 960 MiB 899 MiB 1 KiB 62 MiB 19 GiB 4.69 0.79 195 up osd.4 2026-04-16 06:52:36.005550 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 155 MiB 38 GiB 5.95 1.00 - host testbed-node-5 2026-04-16 06:52:36.005562 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 82 MiB 19 GiB 7.26 1.22 197 up osd.1 2026-04-16 06:52:36.005573 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 949 MiB 875 MiB 1 KiB 74 MiB 19 GiB 4.64 0.78 191 up osd.5 2026-04-16 06:52:36.005584 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 446 MiB 113 GiB 5.93 2026-04-16 06:52:36.005595 | orchestrator | MIN/MAX VAR: 0.78/1.22 STDDEV: 1.06 2026-04-16 06:52:36.049442 | orchestrator | 2026-04-16 06:52:36.049548 | orchestrator | # Ceph monitor status 2026-04-16 06:52:36.049564 | orchestrator | 2026-04-16 06:52:36.049576 | orchestrator | + echo 2026-04-16 06:52:36.049587 | orchestrator | + echo '# 
Ceph monitor status' 2026-04-16 06:52:36.049598 | orchestrator | + echo 2026-04-16 06:52:36.049609 | orchestrator | + ceph mon stat 2026-04-16 06:52:36.616064 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-16 06:52:36.663861 | orchestrator | 2026-04-16 06:52:36.663956 | orchestrator | # Ceph quorum status 2026-04-16 06:52:36.663971 | orchestrator | 2026-04-16 06:52:36.663983 | orchestrator | + echo 2026-04-16 06:52:36.663994 | orchestrator | + echo '# Ceph quorum status' 2026-04-16 06:52:36.664005 | orchestrator | + echo 2026-04-16 06:52:36.664361 | orchestrator | + ceph quorum_status 2026-04-16 06:52:36.664665 | orchestrator | + jq 2026-04-16 06:52:37.281274 | orchestrator | { 2026-04-16 06:52:37.281358 | orchestrator | "election_epoch": 8, 2026-04-16 06:52:37.281368 | orchestrator | "quorum": [ 2026-04-16 06:52:37.281373 | orchestrator | 0, 2026-04-16 06:52:37.281378 | orchestrator | 1, 2026-04-16 06:52:37.281382 | orchestrator | 2 2026-04-16 06:52:37.281386 | orchestrator | ], 2026-04-16 06:52:37.281391 | orchestrator | "quorum_names": [ 2026-04-16 06:52:37.281395 | orchestrator | "testbed-node-0", 2026-04-16 06:52:37.281399 | orchestrator | "testbed-node-1", 2026-04-16 06:52:37.281403 | orchestrator | "testbed-node-2" 2026-04-16 06:52:37.281408 | orchestrator | ], 2026-04-16 06:52:37.281412 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-16 06:52:37.281417 | orchestrator | "quorum_age": 3990, 2026-04-16 06:52:37.281421 | orchestrator | "features": { 2026-04-16 06:52:37.281425 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-16 06:52:37.281429 | orchestrator | "quorum_mon": [ 2026-04-16 06:52:37.281434 | 
orchestrator | "kraken", 2026-04-16 06:52:37.281438 | orchestrator | "luminous", 2026-04-16 06:52:37.281442 | orchestrator | "mimic", 2026-04-16 06:52:37.281446 | orchestrator | "osdmap-prune", 2026-04-16 06:52:37.281450 | orchestrator | "nautilus", 2026-04-16 06:52:37.281454 | orchestrator | "octopus", 2026-04-16 06:52:37.281458 | orchestrator | "pacific", 2026-04-16 06:52:37.281462 | orchestrator | "elector-pinging", 2026-04-16 06:52:37.281466 | orchestrator | "quincy", 2026-04-16 06:52:37.281470 | orchestrator | "reef" 2026-04-16 06:52:37.281474 | orchestrator | ] 2026-04-16 06:52:37.281478 | orchestrator | }, 2026-04-16 06:52:37.281482 | orchestrator | "monmap": { 2026-04-16 06:52:37.281486 | orchestrator | "epoch": 1, 2026-04-16 06:52:37.281490 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-16 06:52:37.281495 | orchestrator | "modified": "2026-04-16T05:45:49.118505Z", 2026-04-16 06:52:37.281500 | orchestrator | "created": "2026-04-16T05:45:49.118505Z", 2026-04-16 06:52:37.281504 | orchestrator | "min_mon_release": 18, 2026-04-16 06:52:37.281508 | orchestrator | "min_mon_release_name": "reef", 2026-04-16 06:52:37.281512 | orchestrator | "election_strategy": 1, 2026-04-16 06:52:37.281516 | orchestrator | "disallowed_leaders: ": "", 2026-04-16 06:52:37.281520 | orchestrator | "stretch_mode": false, 2026-04-16 06:52:37.281524 | orchestrator | "tiebreaker_mon": "", 2026-04-16 06:52:37.281544 | orchestrator | "removed_ranks: ": "", 2026-04-16 06:52:37.281549 | orchestrator | "features": { 2026-04-16 06:52:37.281553 | orchestrator | "persistent": [ 2026-04-16 06:52:37.281557 | orchestrator | "kraken", 2026-04-16 06:52:37.281561 | orchestrator | "luminous", 2026-04-16 06:52:37.281565 | orchestrator | "mimic", 2026-04-16 06:52:37.281569 | orchestrator | "osdmap-prune", 2026-04-16 06:52:37.281573 | orchestrator | "nautilus", 2026-04-16 06:52:37.281577 | orchestrator | "octopus", 2026-04-16 06:52:37.281581 | orchestrator | "pacific", 2026-04-16 
06:52:37.281585 | orchestrator | "elector-pinging", 2026-04-16 06:52:37.281589 | orchestrator | "quincy", 2026-04-16 06:52:37.281593 | orchestrator | "reef" 2026-04-16 06:52:37.281597 | orchestrator | ], 2026-04-16 06:52:37.281601 | orchestrator | "optional": [] 2026-04-16 06:52:37.281605 | orchestrator | }, 2026-04-16 06:52:37.281609 | orchestrator | "mons": [ 2026-04-16 06:52:37.281613 | orchestrator | { 2026-04-16 06:52:37.281617 | orchestrator | "rank": 0, 2026-04-16 06:52:37.281621 | orchestrator | "name": "testbed-node-0", 2026-04-16 06:52:37.281625 | orchestrator | "public_addrs": { 2026-04-16 06:52:37.281629 | orchestrator | "addrvec": [ 2026-04-16 06:52:37.281633 | orchestrator | { 2026-04-16 06:52:37.281637 | orchestrator | "type": "v2", 2026-04-16 06:52:37.281642 | orchestrator | "addr": "192.168.16.8:3300", 2026-04-16 06:52:37.281646 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281650 | orchestrator | }, 2026-04-16 06:52:37.281654 | orchestrator | { 2026-04-16 06:52:37.281658 | orchestrator | "type": "v1", 2026-04-16 06:52:37.281662 | orchestrator | "addr": "192.168.16.8:6789", 2026-04-16 06:52:37.281666 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281670 | orchestrator | } 2026-04-16 06:52:37.281674 | orchestrator | ] 2026-04-16 06:52:37.281678 | orchestrator | }, 2026-04-16 06:52:37.281682 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-04-16 06:52:37.281686 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-04-16 06:52:37.281691 | orchestrator | "priority": 0, 2026-04-16 06:52:37.281695 | orchestrator | "weight": 0, 2026-04-16 06:52:37.281699 | orchestrator | "crush_location": "{}" 2026-04-16 06:52:37.281703 | orchestrator | }, 2026-04-16 06:52:37.281707 | orchestrator | { 2026-04-16 06:52:37.281711 | orchestrator | "rank": 1, 2026-04-16 06:52:37.281715 | orchestrator | "name": "testbed-node-1", 2026-04-16 06:52:37.281719 | orchestrator | "public_addrs": { 2026-04-16 06:52:37.281723 | orchestrator | "addrvec": [ 2026-04-16 
06:52:37.281727 | orchestrator | { 2026-04-16 06:52:37.281731 | orchestrator | "type": "v2", 2026-04-16 06:52:37.281749 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-16 06:52:37.281753 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281757 | orchestrator | }, 2026-04-16 06:52:37.281761 | orchestrator | { 2026-04-16 06:52:37.281766 | orchestrator | "type": "v1", 2026-04-16 06:52:37.281770 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-16 06:52:37.281774 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281778 | orchestrator | } 2026-04-16 06:52:37.281782 | orchestrator | ] 2026-04-16 06:52:37.281786 | orchestrator | }, 2026-04-16 06:52:37.281790 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-16 06:52:37.281794 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-16 06:52:37.281798 | orchestrator | "priority": 0, 2026-04-16 06:52:37.281802 | orchestrator | "weight": 0, 2026-04-16 06:52:37.281806 | orchestrator | "crush_location": "{}" 2026-04-16 06:52:37.281810 | orchestrator | }, 2026-04-16 06:52:37.281814 | orchestrator | { 2026-04-16 06:52:37.281818 | orchestrator | "rank": 2, 2026-04-16 06:52:37.281823 | orchestrator | "name": "testbed-node-2", 2026-04-16 06:52:37.281829 | orchestrator | "public_addrs": { 2026-04-16 06:52:37.281836 | orchestrator | "addrvec": [ 2026-04-16 06:52:37.281843 | orchestrator | { 2026-04-16 06:52:37.281849 | orchestrator | "type": "v2", 2026-04-16 06:52:37.281856 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-16 06:52:37.281863 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281871 | orchestrator | }, 2026-04-16 06:52:37.281878 | orchestrator | { 2026-04-16 06:52:37.281885 | orchestrator | "type": "v1", 2026-04-16 06:52:37.281893 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-16 06:52:37.281899 | orchestrator | "nonce": 0 2026-04-16 06:52:37.281904 | orchestrator | } 2026-04-16 06:52:37.281909 | orchestrator | ] 2026-04-16 06:52:37.281913 | orchestrator | }, 2026-04-16 06:52:37.281957 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-16 06:52:37.281962 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-16 06:52:37.281967 | orchestrator | "priority": 0, 2026-04-16 06:52:37.281971 | orchestrator | "weight": 0, 2026-04-16 06:52:37.281976 | orchestrator | "crush_location": "{}" 2026-04-16 06:52:37.281980 | orchestrator | } 2026-04-16 06:52:37.281985 | orchestrator | ] 2026-04-16 06:52:37.281989 | orchestrator | } 2026-04-16 06:52:37.281994 | orchestrator | } 2026-04-16 06:52:37.281998 | orchestrator | 2026-04-16 06:52:37.282003 | orchestrator | # Ceph free space status 2026-04-16 06:52:37.282007 | orchestrator | 2026-04-16 06:52:37.282012 | orchestrator | + echo 2026-04-16 06:52:37.282057 | orchestrator | + echo '# Ceph free space status' 2026-04-16 06:52:37.282062 | orchestrator | + echo 2026-04-16 06:52:37.282067 | orchestrator | + ceph df 2026-04-16 06:52:37.885032 | orchestrator | --- RAW STORAGE --- 2026-04-16 06:52:37.885168 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-16 06:52:37.885198 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.93 2026-04-16 06:52:37.885210 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.93 2026-04-16 06:52:37.885221 | orchestrator | 2026-04-16 06:52:37.885232 | orchestrator | --- POOLS --- 2026-04-16 06:52:37.885243 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-16 06:52:37.885256 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-04-16 06:52:37.885266 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-16 06:52:37.885280 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-04-16 06:52:37.885298 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-16 06:52:37.885315 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-16 06:52:37.885335 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-16 06:52:37.885353 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-04-16 06:52:37.885372 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-16 06:52:37.885392 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-04-16 06:52:37.885411 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 06:52:37.885429 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 06:52:37.885440 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.95 35 GiB 2026-04-16 06:52:37.885451 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 06:52:37.885462 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 06:52:37.933596 | orchestrator | ++ semver 9.5.0 5.0.0 2026-04-16 06:52:37.983555 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-16 06:52:37.983648 | orchestrator | + osism apply facts 2026-04-16 06:52:50.070903 | orchestrator | 2026-04-16 06:52:50 | INFO  | Task 4a120ae3-1b48-43e7-860c-2212498e965b (facts) was prepared for execution. 2026-04-16 06:52:50.071014 | orchestrator | 2026-04-16 06:52:50 | INFO  | It takes a moment until task 4a120ae3-1b48-43e7-860c-2212498e965b (facts) has been started and output is visible here. 
2026-04-16 06:53:03.769277 | orchestrator |
2026-04-16 06:53:03.769383 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-16 06:53:03.769396 | orchestrator |
2026-04-16 06:53:03.769407 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-16 06:53:03.769416 | orchestrator | Thursday 16 April 2026 06:52:54 +0000 (0:00:00.278) 0:00:00.278 ********
2026-04-16 06:53:03.769425 | orchestrator | ok: [testbed-manager]
2026-04-16 06:53:03.769435 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:03.769443 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:03.769452 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:03.769461 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:53:03.769469 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:53:03.769478 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:53:03.769506 | orchestrator |
2026-04-16 06:53:03.769516 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-16 06:53:03.769524 | orchestrator | Thursday 16 April 2026 06:52:55 +0000 (0:00:01.218) 0:00:01.497 ********
2026-04-16 06:53:03.769533 | orchestrator | skipping: [testbed-manager]
2026-04-16 06:53:03.769542 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:03.769550 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:53:03.769559 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:53:03.769567 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:53:03.769576 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:53:03.769585 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:53:03.769593 | orchestrator |
2026-04-16 06:53:03.769602 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-16 06:53:03.769610 | orchestrator |
2026-04-16 06:53:03.769619 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 06:53:03.769627 | orchestrator | Thursday 16 April 2026 06:52:57 +0000 (0:00:01.337) 0:00:02.834 ********
2026-04-16 06:53:03.769636 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:03.769644 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:03.769653 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:03.769661 | orchestrator | ok: [testbed-manager]
2026-04-16 06:53:03.769670 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:53:03.769678 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:53:03.769687 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:53:03.769695 | orchestrator |
2026-04-16 06:53:03.769704 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-16 06:53:03.769712 | orchestrator |
2026-04-16 06:53:03.769721 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-16 06:53:03.769730 | orchestrator | Thursday 16 April 2026 06:53:02 +0000 (0:00:05.738) 0:00:08.573 ********
2026-04-16 06:53:03.769739 | orchestrator | skipping: [testbed-manager]
2026-04-16 06:53:03.769747 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:03.769756 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:53:03.769764 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:53:03.769773 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:53:03.769781 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:53:03.769791 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:53:03.769801 | orchestrator |
2026-04-16 06:53:03.769811 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:53:03.769821 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769832 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769856 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769867 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769877 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769887 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769897 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:03.769907 | orchestrator |
2026-04-16 06:53:03.769917 | orchestrator |
2026-04-16 06:53:03.769927 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:53:03.769937 | orchestrator | Thursday 16 April 2026 06:53:03 +0000 (0:00:00.604) 0:00:09.178 ********
2026-04-16 06:53:03.769947 | orchestrator | ===============================================================================
2026-04-16 06:53:03.769963 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.74s
2026-04-16 06:53:03.769974 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s
2026-04-16 06:53:03.769985 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.22s
2026-04-16 06:53:03.769995 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s
2026-04-16 06:53:04.063484 | orchestrator | + osism validate ceph-mons
2026-04-16 06:53:36.214366 | orchestrator |
2026-04-16 06:53:36.214493 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2026-04-16 06:53:36.214512 | orchestrator |
2026-04-16 06:53:36.214524 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-16 06:53:36.214536 | orchestrator | Thursday 16 April 2026 06:53:20 +0000 (0:00:00.440) 0:00:00.440 ********
2026-04-16 06:53:36.214548 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:53:36.214560 | orchestrator |
2026-04-16 06:53:36.214571 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-16 06:53:36.214582 | orchestrator | Thursday 16 April 2026 06:53:21 +0000 (0:00:00.922) 0:00:01.362 ********
2026-04-16 06:53:36.214593 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:53:36.214604 | orchestrator |
2026-04-16 06:53:36.214615 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-16 06:53:36.214626 | orchestrator | Thursday 16 April 2026 06:53:22 +0000 (0:00:01.003) 0:00:02.365 ********
2026-04-16 06:53:36.214637 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.214649 | orchestrator |
2026-04-16 06:53:36.214660 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-16 06:53:36.214671 | orchestrator | Thursday 16 April 2026 06:53:22 +0000 (0:00:00.122) 0:00:02.487 ********
2026-04-16 06:53:36.214682 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.214693 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:36.214704 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:36.214714 | orchestrator |
2026-04-16 06:53:36.214725 | orchestrator | TASK [Get container info] ******************************************************
2026-04-16 06:53:36.214736 | orchestrator | Thursday 16 April 2026 06:53:22 +0000 (0:00:00.326) 0:00:02.814 ********
2026-04-16 06:53:36.214747 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:36.214758 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:36.214768 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.214779 | orchestrator |
2026-04-16 06:53:36.214790 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-16 06:53:36.214801 | orchestrator | Thursday 16 April 2026 06:53:23 +0000 (0:00:01.001) 0:00:03.815 ********
2026-04-16 06:53:36.214814 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.214827 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:53:36.214839 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:53:36.214852 | orchestrator |
2026-04-16 06:53:36.214864 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-16 06:53:36.214876 | orchestrator | Thursday 16 April 2026 06:53:24 +0000 (0:00:00.299) 0:00:04.114 ********
2026-04-16 06:53:36.214888 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.214901 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:36.214913 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:36.214925 | orchestrator |
2026-04-16 06:53:36.214938 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 06:53:36.214950 | orchestrator | Thursday 16 April 2026 06:53:24 +0000 (0:00:00.503) 0:00:04.618 ********
2026-04-16 06:53:36.214963 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.214975 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:36.214987 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:36.214999 | orchestrator |
2026-04-16 06:53:36.215011 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2026-04-16 06:53:36.215023 | orchestrator | Thursday 16 April 2026 06:53:25 +0000 (0:00:00.351) 0:00:04.969 ********
2026-04-16 06:53:36.215060 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215073 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:53:36.215085 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:53:36.215097 | orchestrator |
2026-04-16 06:53:36.215109 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-16 06:53:36.215122 | orchestrator | Thursday 16 April 2026 06:53:25 +0000 (0:00:00.312) 0:00:05.282 ********
2026-04-16 06:53:36.215134 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.215147 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:53:36.215158 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:53:36.215195 | orchestrator |
2026-04-16 06:53:36.215207 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 06:53:36.215218 | orchestrator | Thursday 16 April 2026 06:53:25 +0000 (0:00:00.507) 0:00:05.790 ********
2026-04-16 06:53:36.215229 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215240 | orchestrator |
2026-04-16 06:53:36.215251 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 06:53:36.215262 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.252) 0:00:06.042 ********
2026-04-16 06:53:36.215278 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215297 | orchestrator |
2026-04-16 06:53:36.215324 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 06:53:36.215347 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.264) 0:00:06.307 ********
2026-04-16 06:53:36.215366 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215384 | orchestrator |
2026-04-16 06:53:36.215403 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:53:36.215423 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.257) 0:00:06.564 ********
2026-04-16 06:53:36.215443 | orchestrator |
2026-04-16 06:53:36.215461 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:53:36.215481 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.070) 0:00:06.634 ********
2026-04-16 06:53:36.215501 | orchestrator |
2026-04-16 06:53:36.215521 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:53:36.215559 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.074) 0:00:06.708 ********
2026-04-16 06:53:36.215572 | orchestrator |
2026-04-16 06:53:36.215583 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 06:53:36.215605 | orchestrator | Thursday 16 April 2026 06:53:26 +0000 (0:00:00.072) 0:00:06.781 ********
2026-04-16 06:53:36.215616 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215626 | orchestrator |
2026-04-16 06:53:36.215637 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-16 06:53:36.215648 | orchestrator | Thursday 16 April 2026 06:53:27 +0000 (0:00:00.255) 0:00:07.037 ********
2026-04-16 06:53:36.215659 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215670 | orchestrator |
2026-04-16 06:53:36.215702 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-16 06:53:36.215714 | orchestrator | Thursday 16 April 2026 06:53:27 +0000 (0:00:00.244) 0:00:07.282 ********
2026-04-16 06:53:36.215724 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.215735 | orchestrator |
2026-04-16 06:53:36.215746 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-16 06:53:36.215757 | orchestrator | Thursday 16 April 2026 06:53:27 +0000 (0:00:00.129) 0:00:07.412 ********
2026-04-16 06:53:36.215768 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:53:36.215778 | orchestrator |
2026-04-16 06:53:36.215795 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-16 06:53:36.215806 | orchestrator | Thursday 16 April 2026 06:53:29 +0000 (0:00:01.541) 0:00:08.953 ********
2026-04-16 06:53:36.215816 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.215827 | orchestrator |
2026-04-16 06:53:36.215838 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-16 06:53:36.215861 | orchestrator | Thursday 16 April 2026 06:53:29 +0000 (0:00:00.519) 0:00:09.472 ********
2026-04-16 06:53:36.215872 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.215883 | orchestrator |
2026-04-16 06:53:36.215913 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-16 06:53:36.215924 | orchestrator | Thursday 16 April 2026 06:53:29 +0000 (0:00:00.135) 0:00:09.608 ********
2026-04-16 06:53:36.215935 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.215946 | orchestrator |
2026-04-16 06:53:36.215956 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-16 06:53:36.215967 | orchestrator | Thursday 16 April 2026 06:53:30 +0000 (0:00:00.345) 0:00:09.953 ********
2026-04-16 06:53:36.215978 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.215988 | orchestrator |
2026-04-16 06:53:36.215999 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-16 06:53:36.216010 | orchestrator | Thursday 16 April 2026 06:53:30 +0000 (0:00:00.335) 0:00:10.289 ********
2026-04-16 06:53:36.216020 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.216031 | orchestrator |
2026-04-16 06:53:36.216092 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-16 06:53:36.216105 | orchestrator | Thursday 16 April 2026 06:53:30 +0000 (0:00:00.110) 0:00:10.399 ********
2026-04-16 06:53:36.216116 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.216127 | orchestrator |
2026-04-16 06:53:36.216138 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-16 06:53:36.216148 | orchestrator | Thursday 16 April 2026 06:53:30 +0000 (0:00:00.124) 0:00:10.524 ********
2026-04-16 06:53:36.216159 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.216208 | orchestrator |
2026-04-16 06:53:36.216226 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-16 06:53:36.216245 | orchestrator | Thursday 16 April 2026 06:53:30 +0000 (0:00:00.125) 0:00:10.649 ********
2026-04-16 06:53:36.216272 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:53:36.216293 | orchestrator |
2026-04-16 06:53:36.216314 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-16 06:53:36.216334 | orchestrator | Thursday 16 April 2026 06:53:32 +0000 (0:00:01.368) 0:00:12.018 ********
2026-04-16 06:53:36.216354 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.216373 | orchestrator |
2026-04-16 06:53:36.216384 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-16 06:53:36.216395 | orchestrator | Thursday 16 April 2026 06:53:32 +0000 (0:00:00.289) 0:00:12.308 ********
2026-04-16 06:53:36.216406 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.216416 | orchestrator |
2026-04-16 06:53:36.216427 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-16 06:53:36.216437 | orchestrator | Thursday 16 April 2026 06:53:32 +0000 (0:00:00.152) 0:00:12.460 ********
2026-04-16 06:53:36.216448 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:53:36.216459 | orchestrator |
2026-04-16 06:53:36.216469 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-16 06:53:36.216480 | orchestrator | Thursday 16 April 2026 06:53:32 +0000 (0:00:00.146) 0:00:12.607 ********
2026-04-16 06:53:36.216490 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.216501 | orchestrator |
2026-04-16 06:53:36.216519 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-16 06:53:36.216530 | orchestrator | Thursday 16 April 2026 06:53:32 +0000 (0:00:00.137) 0:00:12.744 ********
2026-04-16 06:53:36.216541 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.216552 | orchestrator |
2026-04-16 06:53:36.216562 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-16 06:53:36.216573 | orchestrator | Thursday 16 April 2026 06:53:33 +0000 (0:00:00.364) 0:00:13.108 ********
2026-04-16 06:53:36.216583 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:53:36.216594 | orchestrator |
2026-04-16 06:53:36.216605 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-16 06:53:36.216626 | orchestrator | Thursday 16 April 2026 06:53:33 +0000 (0:00:00.281) 0:00:13.390 ********
2026-04-16 06:53:36.216637 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:53:36.216647 | orchestrator |
2026-04-16 06:53:36.216658 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 06:53:36.216668 | orchestrator | Thursday 16 April 2026 06:53:33 +0000 (0:00:00.248) 0:00:13.639 ********
2026-04-16 06:53:36.216679 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:53:36.216690 | orchestrator |
2026-04-16 06:53:36.216700 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 06:53:36.216711 | orchestrator | Thursday 16 April 2026 06:53:35 +0000 (0:00:01.692) 0:00:15.331 ********
2026-04-16 06:53:36.216721 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:53:36.216732 | orchestrator |
2026-04-16 06:53:36.216743 | orchestrator |
TASK [Aggregate test results step three] *************************************** 2026-04-16 06:53:36.216753 | orchestrator | Thursday 16 April 2026 06:53:35 +0000 (0:00:00.254) 0:00:15.586 ******** 2026-04-16 06:53:36.216764 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-16 06:53:36.216775 | orchestrator | 2026-04-16 06:53:36.216796 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:53:38.826647 | orchestrator | Thursday 16 April 2026 06:53:35 +0000 (0:00:00.250) 0:00:15.836 ******** 2026-04-16 06:53:38.826764 | orchestrator | 2026-04-16 06:53:38.826786 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:53:38.826801 | orchestrator | Thursday 16 April 2026 06:53:36 +0000 (0:00:00.067) 0:00:15.904 ******** 2026-04-16 06:53:38.826816 | orchestrator | 2026-04-16 06:53:38.826832 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:53:38.826848 | orchestrator | Thursday 16 April 2026 06:53:36 +0000 (0:00:00.068) 0:00:15.972 ******** 2026-04-16 06:53:38.826862 | orchestrator | 2026-04-16 06:53:38.826871 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-16 06:53:38.826880 | orchestrator | Thursday 16 April 2026 06:53:36 +0000 (0:00:00.071) 0:00:16.043 ******** 2026-04-16 06:53:38.826889 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-16 06:53:38.826898 | orchestrator | 2026-04-16 06:53:38.826907 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-16 06:53:38.826915 | orchestrator | Thursday 16 April 2026 06:53:37 +0000 (0:00:01.476) 0:00:17.520 ******** 2026-04-16 06:53:38.826924 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2026-04-16 06:53:38.826933 | orchestrator |  "msg": [ 
2026-04-16 06:53:38.826942 | orchestrator |         "Validator run completed.",
2026-04-16 06:53:38.826952 | orchestrator |         "You can find the report file here:",
2026-04-16 06:53:38.826961 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-04-16T06:53:21+00:00-report.json",
2026-04-16 06:53:38.826970 | orchestrator |         "on the following host:",
2026-04-16 06:53:38.826979 | orchestrator |         "testbed-manager"
2026-04-16 06:53:38.826988 | orchestrator |     ]
2026-04-16 06:53:38.826997 | orchestrator | }
2026-04-16 06:53:38.827006 | orchestrator |
2026-04-16 06:53:38.827014 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:53:38.827024 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-16 06:53:38.827034 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:38.827044 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:53:38.827052 | orchestrator |
2026-04-16 06:53:38.827061 | orchestrator |
2026-04-16 06:53:38.827071 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:53:38.827123 | orchestrator | Thursday 16 April 2026 06:53:38 +0000 (0:00:00.850) 0:00:18.371 ********
2026-04-16 06:53:38.827141 | orchestrator | ===============================================================================
2026-04-16 06:53:38.827155 | orchestrator | Aggregate test results step one ----------------------------------------- 1.69s
2026-04-16 06:53:38.827166 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.54s
2026-04-16 06:53:38.827207 | orchestrator | Write report file ------------------------------------------------------- 1.48s
2026-04-16 06:53:38.827222 | orchestrator | Gather status data ------------------------------------------------------ 1.37s
2026-04-16 06:53:38.827237 | orchestrator | Create report output directory ------------------------------------------ 1.00s
2026-04-16 06:53:38.827266 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2026-04-16 06:53:38.827280 | orchestrator | Get timestamp for report file ------------------------------------------- 0.92s
2026-04-16 06:53:38.827296 | orchestrator | Print report file information ------------------------------------------- 0.85s
2026-04-16 06:53:38.827331 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-04-16 06:53:38.827346 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.51s
2026-04-16 06:53:38.827361 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s
2026-04-16 06:53:38.827374 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.36s
2026-04-16 06:53:38.827388 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s
2026-04-16 06:53:38.827403 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.35s
2026-04-16 06:53:38.827418 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2026-04-16 06:53:38.827432 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s
2026-04-16 06:53:38.827446 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2026-04-16 06:53:38.827461 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-04-16 06:53:38.827476 | orchestrator | Set health test data ---------------------------------------------------- 0.29s
2026-04-16 06:53:38.827490 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2026-04-16 06:53:39.136884 | orchestrator | + osism validate ceph-mgrs
2026-04-16 06:54:09.846247 | orchestrator |
2026-04-16 06:54:09.846381 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-16 06:54:09.846405 | orchestrator |
2026-04-16 06:54:09.846422 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-16 06:54:09.846441 | orchestrator | Thursday 16 April 2026 06:53:55 +0000 (0:00:00.438) 0:00:00.438 ********
2026-04-16 06:54:09.846461 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.846479 | orchestrator |
2026-04-16 06:54:09.846497 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-16 06:54:09.846510 | orchestrator | Thursday 16 April 2026 06:53:56 +0000 (0:00:00.835) 0:00:01.274 ********
2026-04-16 06:54:09.846520 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.846529 | orchestrator |
2026-04-16 06:54:09.846539 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-16 06:54:09.846549 | orchestrator | Thursday 16 April 2026 06:53:57 +0000 (0:00:00.993) 0:00:02.267 ********
2026-04-16 06:54:09.846559 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.846570 | orchestrator |
2026-04-16 06:54:09.846580 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-16 06:54:09.846590 | orchestrator | Thursday 16 April 2026 06:53:57 +0000 (0:00:00.127) 0:00:02.395 ********
2026-04-16 06:54:09.846600 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.846609 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:54:09.846619 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:54:09.846629 | orchestrator |
2026-04-16 06:54:09.846662 | orchestrator | TASK [Get container info] ******************************************************
2026-04-16 06:54:09.846679 | orchestrator | Thursday 16 April 2026 06:53:58 +0000 (0:00:00.317) 0:00:02.713 ********
2026-04-16 06:54:09.846695 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.846711 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:54:09.846726 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:54:09.846742 | orchestrator |
2026-04-16 06:54:09.846758 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-16 06:54:09.846773 | orchestrator | Thursday 16 April 2026 06:53:59 +0000 (0:00:01.058) 0:00:03.771 ********
2026-04-16 06:54:09.846788 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.846802 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:54:09.846816 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:54:09.846832 | orchestrator |
2026-04-16 06:54:09.846847 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-16 06:54:09.846863 | orchestrator | Thursday 16 April 2026 06:53:59 +0000 (0:00:00.282) 0:00:04.054 ********
2026-04-16 06:54:09.846879 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.846895 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:54:09.846911 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:54:09.846925 | orchestrator |
2026-04-16 06:54:09.846940 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 06:54:09.846956 | orchestrator | Thursday 16 April 2026 06:53:59 +0000 (0:00:00.480) 0:00:04.535 ********
2026-04-16 06:54:09.846971 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.846987 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:54:09.847003 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:54:09.847017 | orchestrator |
2026-04-16 06:54:09.847033 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-16 06:54:09.847049 | orchestrator | Thursday 16 April 2026 06:54:00 +0000 (0:00:00.332) 0:00:04.867 ********
2026-04-16 06:54:09.847064 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847081 | orchestrator | skipping: [testbed-node-1]
2026-04-16 06:54:09.847097 | orchestrator | skipping: [testbed-node-2]
2026-04-16 06:54:09.847112 | orchestrator |
2026-04-16 06:54:09.847128 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-16 06:54:09.847142 | orchestrator | Thursday 16 April 2026 06:54:00 +0000 (0:00:00.327) 0:00:05.195 ********
2026-04-16 06:54:09.847157 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.847173 | orchestrator | ok: [testbed-node-1]
2026-04-16 06:54:09.847218 | orchestrator | ok: [testbed-node-2]
2026-04-16 06:54:09.847237 | orchestrator |
2026-04-16 06:54:09.847254 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 06:54:09.847271 | orchestrator | Thursday 16 April 2026 06:54:00 +0000 (0:00:00.492) 0:00:05.688 ********
2026-04-16 06:54:09.847287 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847303 | orchestrator |
2026-04-16 06:54:09.847321 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 06:54:09.847338 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.270) 0:00:05.959 ********
2026-04-16 06:54:09.847355 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847371 | orchestrator |
2026-04-16 06:54:09.847388 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 06:54:09.847406 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.255) 0:00:06.214 ********
2026-04-16 06:54:09.847423 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847439 | orchestrator |
2026-04-16 06:54:09.847454 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.847470 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.071) 0:00:06.461 ********
2026-04-16 06:54:09.847486 | orchestrator |
2026-04-16 06:54:09.847503 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.847520 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.071) 0:00:06.532 ********
2026-04-16 06:54:09.847536 | orchestrator |
2026-04-16 06:54:09.847552 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.847589 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.069) 0:00:06.601 ********
2026-04-16 06:54:09.847606 | orchestrator |
2026-04-16 06:54:09.847620 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 06:54:09.847635 | orchestrator | Thursday 16 April 2026 06:54:01 +0000 (0:00:00.074) 0:00:06.676 ********
2026-04-16 06:54:09.847649 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847663 | orchestrator |
2026-04-16 06:54:09.847679 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-16 06:54:09.847695 | orchestrator | Thursday 16 April 2026 06:54:02 +0000 (0:00:00.250) 0:00:06.927 ********
2026-04-16 06:54:09.847712 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.847728 | orchestrator |
2026-04-16 06:54:09.847774 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-16 06:54:09.847790 | orchestrator | Thursday 16 April 2026 06:54:02 +0000 (0:00:00.243) 0:00:07.171 ********
2026-04-16 06:54:09.847807 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.847822 | orchestrator |
2026-04-16 06:54:09.847836 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-16 06:54:09.847851 | orchestrator | Thursday 16 April 2026 06:54:02 +0000 (0:00:00.126) 0:00:07.297 ********
2026-04-16 06:54:09.847866 | orchestrator | changed: [testbed-node-0]
2026-04-16 06:54:09.847883 | orchestrator |
2026-04-16 06:54:09.847900 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-16 06:54:09.847917 | orchestrator | Thursday 16 April 2026 06:54:04 +0000 (0:00:01.869) 0:00:09.167 ********
2026-04-16 06:54:09.847933 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.847950 | orchestrator |
2026-04-16 06:54:09.847967 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-16 06:54:09.847984 | orchestrator | Thursday 16 April 2026 06:54:04 +0000 (0:00:00.408) 0:00:09.575 ********
2026-04-16 06:54:09.848002 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.848019 | orchestrator |
2026-04-16 06:54:09.848036 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-16 06:54:09.848052 | orchestrator | Thursday 16 April 2026 06:54:05 +0000 (0:00:00.133) 0:00:09.914 ********
2026-04-16 06:54:09.848069 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.848085 | orchestrator |
2026-04-16 06:54:09.848102 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-16 06:54:09.848118 | orchestrator | Thursday 16 April 2026 06:54:05 +0000 (0:00:00.133) 0:00:10.047 ********
2026-04-16 06:54:09.848134 | orchestrator | ok: [testbed-node-0]
2026-04-16 06:54:09.848151 | orchestrator |
2026-04-16 06:54:09.848168 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-16 06:54:09.848184 | orchestrator | Thursday 16 April 2026 06:54:05 +0000 (0:00:00.239) 0:00:10.181 ********
2026-04-16 06:54:09.848275 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.848292 | orchestrator |
2026-04-16 06:54:09.848309 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-16 06:54:09.848325 | orchestrator | Thursday 16 April 2026 06:54:05 +0000 (0:00:00.240) 0:00:10.421 ********
2026-04-16 06:54:09.848343 | orchestrator | skipping: [testbed-node-0]
2026-04-16 06:54:09.848359 | orchestrator |
2026-04-16 06:54:09.848375 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 06:54:09.848415 | orchestrator | Thursday 16 April 2026 06:54:05 +0000 (0:00:00.240) 0:00:10.661 ********
2026-04-16 06:54:09.848432 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.848448 | orchestrator |
2026-04-16 06:54:09.848465 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 06:54:09.848482 | orchestrator | Thursday 16 April 2026 06:54:07 +0000 (0:00:01.251) 0:00:11.913 ********
2026-04-16 06:54:09.848498 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.848514 | orchestrator |
2026-04-16 06:54:09.848547 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 06:54:09.848564 | orchestrator | Thursday 16 April 2026 06:54:07 +0000 (0:00:00.257) 0:00:12.170 ********
2026-04-16 06:54:09.848580 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.848597 | orchestrator |
2026-04-16 06:54:09.848613 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.848630 | orchestrator | Thursday 16 April 2026 06:54:07 +0000 (0:00:00.237) 0:00:12.407 ********
2026-04-16 06:54:09.848646 | orchestrator |
2026-04-16 06:54:09.848662 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.848702 | orchestrator | Thursday 16 April 2026 06:54:07 +0000 (0:00:00.068) 0:00:12.476 ********
2026-04-16 06:54:09.848719 | orchestrator |
2026-04-16 06:54:09.848735 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 06:54:09.848751 | orchestrator | Thursday 16 April 2026 06:54:07 +0000 (0:00:00.068) 0:00:12.545 ********
2026-04-16 06:54:09.848767 | orchestrator |
2026-04-16 06:54:09.848783 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-16 06:54:09.848799 | orchestrator | Thursday 16 April 2026 06:54:08 +0000 (0:00:00.283) 0:00:12.829 ********
2026-04-16 06:54:09.848815 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:09.848832 | orchestrator |
2026-04-16 06:54:09.848848 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 06:54:09.848863 | orchestrator | Thursday 16 April 2026 06:54:09 +0000 (0:00:01.295) 0:00:14.124 ********
2026-04-16 06:54:09.848885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-16 06:54:09.848902 | orchestrator |     "msg": [
2026-04-16 06:54:09.848918 | orchestrator |         "Validator run completed.",
2026-04-16 06:54:09.848934 | orchestrator |         "You can find the report file here:",
2026-04-16 06:54:09.848951 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-16T06:53:56+00:00-report.json",
2026-04-16 06:54:09.848969 | orchestrator |         "on the following host:",
2026-04-16 06:54:09.848985 | orchestrator |         "testbed-manager"
2026-04-16 06:54:09.849001 | orchestrator |     ]
2026-04-16 06:54:09.849017 | orchestrator | }
2026-04-16 06:54:09.849034 | orchestrator |
2026-04-16 06:54:09.849049 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 06:54:09.849067 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-16 06:54:09.849085 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:54:09.849117 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 06:54:10.203684 | orchestrator |
2026-04-16 06:54:10.203775 | orchestrator |
2026-04-16 06:54:10.203786 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 06:54:10.203794 | orchestrator | Thursday 16 April 2026 06:54:09 +0000 (0:00:00.397) 0:00:14.522 ********
2026-04-16 06:54:10.203800 | orchestrator | ===============================================================================
2026-04-16 06:54:10.203806 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.87s
2026-04-16 06:54:10.203812 | orchestrator | Write report file ------------------------------------------------------- 1.30s
2026-04-16 06:54:10.203819 | orchestrator | Aggregate test results step one ----------------------------------------- 1.25s
2026-04-16 06:54:10.203825 | orchestrator | Get container info ------------------------------------------------------ 1.06s
2026-04-16 06:54:10.203831 | orchestrator | Create report output directory ------------------------------------------ 0.99s
2026-04-16 06:54:10.203836 | orchestrator | Get timestamp for report file ------------------------------------------- 0.84s
2026-04-16 06:54:10.203842 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.49s
2026-04-16 06:54:10.203867 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2026-04-16 06:54:10.203874 | orchestrator | Flush handlers ---------------------------------------------------------- 0.42s
2026-04-16 06:54:10.203880 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.41s
2026-04-16 06:54:10.203885 | orchestrator |
Print report file information ------------------------------------------- 0.40s
2026-04-16 06:54:10.203891 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.34s
2026-04-16 06:54:10.203897 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-04-16 06:54:10.203902 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2026-04-16 06:54:10.203908 | orchestrator | Prepare test data for container existance test -------------------------- 0.32s
2026-04-16 06:54:10.203914 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2026-04-16 06:54:10.203920 | orchestrator | Aggregate test results step one ----------------------------------------- 0.27s
2026-04-16 06:54:10.203925 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-04-16 06:54:10.203931 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2026-04-16 06:54:10.203936 | orchestrator | Print report file information ------------------------------------------- 0.25s
2026-04-16 06:54:10.526687 | orchestrator | + osism validate ceph-osds
2026-04-16 06:54:31.474417 | orchestrator |
2026-04-16 06:54:31.474533 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-16 06:54:31.474549 | orchestrator |
2026-04-16 06:54:31.474561 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-16 06:54:31.474573 | orchestrator | Thursday 16 April 2026 06:54:27 +0000 (0:00:00.416) 0:00:00.416 ********
2026-04-16 06:54:31.474584 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:31.474595 | orchestrator |
2026-04-16 06:54:31.474605 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-16 06:54:31.474616 | orchestrator | Thursday 16 April 2026 06:54:27 +0000 (0:00:00.828) 0:00:01.245 ********
2026-04-16 06:54:31.474627 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:31.474637 | orchestrator |
2026-04-16 06:54:31.474648 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-16 06:54:31.474658 | orchestrator | Thursday 16 April 2026 06:54:28 +0000 (0:00:00.542) 0:00:01.787 ********
2026-04-16 06:54:31.474669 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 06:54:31.474679 | orchestrator |
2026-04-16 06:54:31.474690 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-16 06:54:31.474701 | orchestrator | Thursday 16 April 2026 06:54:29 +0000 (0:00:00.710) 0:00:02.498 ********
2026-04-16 06:54:31.474711 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:54:31.474723 | orchestrator |
2026-04-16 06:54:31.474735 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-16 06:54:31.474746 | orchestrator | Thursday 16 April 2026 06:54:29 +0000 (0:00:00.127) 0:00:02.625 ********
2026-04-16 06:54:31.474756 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:54:31.474767 | orchestrator |
2026-04-16 06:54:31.474777 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-16 06:54:31.474804 | orchestrator | Thursday 16 April 2026 06:54:29 +0000 (0:00:00.126) 0:00:02.752 ********
2026-04-16 06:54:31.474816 | orchestrator | skipping: [testbed-node-3]
2026-04-16 06:54:31.474826 | orchestrator | skipping: [testbed-node-4]
2026-04-16 06:54:31.474837 | orchestrator | skipping: [testbed-node-5]
2026-04-16 06:54:31.474847 | orchestrator |
2026-04-16 06:54:31.474858 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-16 06:54:31.474869 | orchestrator | Thursday 16 April 2026 06:54:29 +0000 (0:00:00.330) 0:00:03.082 ********
2026-04-16 06:54:31.474879 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:54:31.474914 | orchestrator |
2026-04-16 06:54:31.474925 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-16 06:54:31.474936 | orchestrator | Thursday 16 April 2026 06:54:29 +0000 (0:00:00.143) 0:00:03.226 ********
2026-04-16 06:54:31.474948 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:54:31.474960 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:54:31.474971 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:54:31.474983 | orchestrator |
2026-04-16 06:54:31.474995 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-16 06:54:31.475007 | orchestrator | Thursday 16 April 2026 06:54:30 +0000 (0:00:00.316) 0:00:03.542 ********
2026-04-16 06:54:31.475019 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:54:31.475031 | orchestrator |
2026-04-16 06:54:31.475041 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 06:54:31.475052 | orchestrator | Thursday 16 April 2026 06:54:30 +0000 (0:00:00.765) 0:00:04.308 ********
2026-04-16 06:54:31.475062 | orchestrator | ok: [testbed-node-3]
2026-04-16 06:54:31.475073 | orchestrator | ok: [testbed-node-4]
2026-04-16 06:54:31.475084 | orchestrator | ok: [testbed-node-5]
2026-04-16 06:54:31.475095 | orchestrator |
2026-04-16 06:54:31.475105 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-16 06:54:31.475116 | orchestrator | Thursday 16 April 2026 06:54:31 +0000 (0:00:00.302) 0:00:04.610 ********
2026-04-16 06:54:31.475129 | orchestrator | skipping: [testbed-node-3] => (item={'id': '106b6f008c4a6c6c987d69a876c71fbfc2e6353426d134f9fdb73f2b82c279be', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-04-16 06:54:31.475144 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'df977df4e602597da48773ac280bbf86f525f8f8c132e5af5f9485f30f5b8ff4', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-16 06:54:31.475156 | orchestrator | skipping: [testbed-node-3] => (item={'id': '64d9dcebda610805821a0ba7899467c14aa40cc3e22c11878d9caff25f393037', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-04-16 06:54:31.475168 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e0c9b1fde6c4602aad2051e78e9992f593b415de4a3e2ca014c19c5a927fd18b', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})
2026-04-16 06:54:31.475179 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a38f77bbe4e5d46dab7614631633b06e5a81183c2c88e55944aedc96343865c2', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-04-16 06:54:31.475274 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97fa04180e8e85e5f8a85cddd0bcce03bd28166986a13e380a3937d899057889', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})
2026-04-16 06:54:31.475290 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b4558667e668f1c6d85bdce022f8dfc1878e03b9aa8f2b2a8cf0d3f4dd04bdcb', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-04-16 06:54:31.475301 | orchestrator | skipping: [testbed-node-3] => (item={'id': '35894b6067c8382cee49e2a687b210e88217659d0af60937ad4d8b025a3f8232', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})
2026-04-16 06:54:31.475312 | orchestrator | skipping: [testbed-node-3] => (item={'id': '617fe0f7e205c8478e2bb1c5263d42541442316c0694e602f8603e668854d8c5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475333 | orchestrator | skipping: [testbed-node-3] => (item={'id': '68599360bfe359cf3561efed53ae8424327a8ec20bc030b8e3d0a89e32d1ac8b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475353 | orchestrator | skipping: [testbed-node-3] => (item={'id': '818b270fdd3c11fd270104af54254e44454dc3ae555df4fe45323ec3f38a4b7e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475381 | orchestrator | ok: [testbed-node-3] => (item={'id': '96247929e612fc244c45f32803e448088778367a9d964183ba9142bff1293a9d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475407 | orchestrator | ok: [testbed-node-3] => (item={'id': '8265461d32314c78bd8e4fa64f19b91561f7f39e99151fa0665e1cc4d3e99252', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475425 | orchestrator | skipping: [testbed-node-3] => (item={'id': '591ea47e5b9c6fb83a1c9dfe900e908373c49a1ff7e13cc575754d0021fb47ec', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-04-16 06:54:31.475445 | orchestrator | skipping: [testbed-node-3] => (item={'id': '00623e99ac6965ed4912941a87fee884d61a3fcb7470f2305165cbad0075d601', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-16 06:54:31.475465 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10d3933025e084fb95d07cc36ac82b4b7f63325decbd5fa718e9051f6d9cce3c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-04-16 06:54:31.475485 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8babeec9ded36573596ec48dddc72931960ac259d65b0878a5cf6e6a5c54a027', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-16 06:54:31.475502 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca94e4ff7f6f2f0e1a58a8e3fbafb505b8873e42289c917e888b93a7575027d8', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-04-16 06:54:31.475519 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cf60015f552b2e7c794c44008041b36313adbd75dbe28395f4d4365704ddce51', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-04-16 06:54:31.475536 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1bcfcbf78bde7e6895e761905aaae21c36d8dd3dbd998eb0b4fe06ac588dbea3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status':
'Up 2 hours'})  2026-04-16 06:54:31.475566 | orchestrator | skipping: [testbed-node-4] => (item={'id': '553c45261bf0042dc0e2d4b9861810d7b007356d9ca3e66c92956b14d2ea36f3', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-16 06:54:31.718948 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'adea6ee0c5d9ce0f14ee6000bd2576375216969b0b944902a5ec4c264ab32517', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-16 06:54:31.719056 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5395e9c75d67084d09f0bdd646c14eb7cd1e51b5fc80f7610c7424d5a9113606', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-04-16 06:54:31.719067 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3d67264b73d0d1f569c8778b2a06c4d795c12b13aa12b08ca16ccc3405444a11', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})  2026-04-16 06:54:31.719074 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cb7bc08576fb6e1e721e07465909508c2ef828550af01fa48b3c8574ffac41d9', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})  2026-04-16 06:54:31.719078 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cf009f9bbe0976c7e726eb3d6124d8e724cc8e448753510e76e263855d0e9c5f', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-16 06:54:31.719114 | orchestrator | skipping: [testbed-node-4] => 
(item={'id': '683b7edf34c03c2e98b560532bc837802f5af0dd980b3ab990285a18d65978ae', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})  2026-04-16 06:54:31.719119 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fa6ec673033393a1d8b2bf267ec530bc2641d2570289fb9f775b4f2f881091ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719124 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e018e75a7189b4a37924c1df7c3aded686b0ef50e5d0f40478493ad74a6f619c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719129 | orchestrator | skipping: [testbed-node-4] => (item={'id': '32c19d67d7c305c77efec81b7892e36a1d32f59342df967833733edd2bd83434', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719135 | orchestrator | ok: [testbed-node-4] => (item={'id': 'e80864da91ed0484d2fe1dc536afa38764fc1ff64ebcfa034638238a82d0f7fc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-16 06:54:31.719140 | orchestrator | ok: [testbed-node-4] => (item={'id': '2e2a5e41852cfffd0d445b9b769808c7e32b63e254426114fee4047dc663810e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-16 06:54:31.719144 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd487e066b04428db83c87d559fb06b4275221b5d75f366cdc4c8e180b616eef1', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 
'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719148 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96f9f3498575ec4452d371057d3c92b97c9434599295e13ba94900f9c8eab05e', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 06:54:31.719152 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b4913a1e26fd5e5ee4fad181966ee1a03cb9839878ba51738960238282c065e8', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 06:54:31.719169 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bbccec5e97ab13210855166d703c1e5dcf3be5872a8894a7456f19b6a2bf7bc4', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:31.719177 | orchestrator | skipping: [testbed-node-4] => (item={'id': '17c9bbb87a7df009010e06210945161d6f289eff878a6944253628d76546f253', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:31.719181 | orchestrator | skipping: [testbed-node-4] => (item={'id': '017d51619327bae3ae564c87f6ac51df99379bbc4c4e1bfadb36435ab0362c6e', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:31.719185 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c216fed4d32669e78db363b75d584061842bb0210f1e116c58a6adcc6898b930', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-04-16 06:54:31.719192 | orchestrator | skipping: [testbed-node-5] => 
(item={'id': '475042cffca16460b9916f5a9d11c2c4c1922e804d6a3fcf4b39876416786b46', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-16 06:54:31.719196 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e2d1a823ef1a7f885ad298d1dac83b514d67719bd4ac67dfea45b0b00ef63237', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-04-16 06:54:31.719253 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0b6825b3303fd097000805dfe4aedf844044ec02d14a1dffa06f0ebafeb7541b', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 19 minutes (unhealthy)'})  2026-04-16 06:54:31.719257 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f646a75b14d66388eb045ee879cfee596f9b934b1b54262a7d38882feed607de', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})  2026-04-16 06:54:31.719261 | orchestrator | skipping: [testbed-node-5] => (item={'id': '22e7dcdbe1de810170732564df69ea193f76d30c9df0adee5a0ce709c93aca62', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 39 minutes (healthy)'})  2026-04-16 06:54:31.719265 | orchestrator | skipping: [testbed-node-5] => (item={'id': '79c63e9be4eed0b6d936954bd85d0ddc69e378f98100eff7eb98001379f66f5b', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-04-16 06:54:31.719269 | orchestrator | skipping: [testbed-node-5] => (item={'id': '623922abaf89451f3f4b694d5de79082381cf38085a7aa7a95d9d93bffd70e8f', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})  2026-04-16 06:54:31.719273 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12f97ab4acb8ac797d38117aa4f1a12a6ddd29cd0440b2e26188218e0ae1e7be', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719277 | orchestrator | skipping: [testbed-node-5] => (item={'id': '137fb12d5f94eaabd65f095fbcad4b9360d6b7b7a0ac6e0b6ef92816d24bd8e1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719281 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd66ccd324b7f85542eeedade6f4f2a285603bfb844656432008a42d10f68ac10', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:31.719288 | orchestrator | ok: [testbed-node-5] => (item={'id': '7c0f71d18132c0cb564717983b68dda3e38770c5c3c6151c1903527beb32b73c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-16 06:54:31.719297 | orchestrator | ok: [testbed-node-5] => (item={'id': '46dc9d722e3ca4d569c70b7af26006857563fb22a0d8e163ef39386d79f070a0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-04-16 06:54:42.755873 | orchestrator | skipping: [testbed-node-5] => (item={'id': '95005d5eeafd2a72db038680992ba193cda56ef1dc1ff1df55ade785883f3730', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 06:54:42.755976 | 
orchestrator | skipping: [testbed-node-5] => (item={'id': '0c90d317fb75d255db28e5f9de86d6804c959155da131f7663d9ec08a57c23da', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 06:54:42.755989 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b6d9734b5ccdccd728efd82cf688acfd7eea2a6390924721c0735f67523fb0e0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 06:54:42.756012 | orchestrator | skipping: [testbed-node-5] => (item={'id': '778fbe6df24bb3c54b20dd708f41877010ef37e09daace6d6b151dcbb2e4cb08', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:42.756021 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f2c98beac013a2828a4b06d61dcf7ffd93ae38bcc4060778ac8a4a98f401f38', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:42.756029 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f08e0b05e91ee703e1f65dd65d9342ad79c37e6a9a1a3a3d41648a609d97254c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 06:54:42.756036 | orchestrator | 2026-04-16 06:54:42.756044 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-16 06:54:42.756052 | orchestrator | Thursday 16 April 2026 06:54:31 +0000 (0:00:00.462) 0:00:05.073 ******** 2026-04-16 06:54:42.756059 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756067 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756074 | orchestrator | ok: [testbed-node-5] 2026-04-16 
06:54:42.756081 | orchestrator | 2026-04-16 06:54:42.756087 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-16 06:54:42.756093 | orchestrator | Thursday 16 April 2026 06:54:31 +0000 (0:00:00.290) 0:00:05.363 ******** 2026-04-16 06:54:42.756100 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756108 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:42.756115 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:42.756122 | orchestrator | 2026-04-16 06:54:42.756129 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-16 06:54:42.756135 | orchestrator | Thursday 16 April 2026 06:54:32 +0000 (0:00:00.496) 0:00:05.859 ******** 2026-04-16 06:54:42.756142 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756148 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756155 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756162 | orchestrator | 2026-04-16 06:54:42.756168 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 06:54:42.756175 | orchestrator | Thursday 16 April 2026 06:54:32 +0000 (0:00:00.317) 0:00:06.177 ******** 2026-04-16 06:54:42.756199 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756230 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756236 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756243 | orchestrator | 2026-04-16 06:54:42.756248 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-16 06:54:42.756255 | orchestrator | Thursday 16 April 2026 06:54:33 +0000 (0:00:00.298) 0:00:06.475 ******** 2026-04-16 06:54:42.756262 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-16 06:54:42.756270 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 
'state': 'running'})  2026-04-16 06:54:42.756277 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756284 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-16 06:54:42.756290 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-16 06:54:42.756297 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:42.756304 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-16 06:54:42.756310 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-16 06:54:42.756316 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:42.756323 | orchestrator | 2026-04-16 06:54:42.756329 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-16 06:54:42.756336 | orchestrator | Thursday 16 April 2026 06:54:33 +0000 (0:00:00.308) 0:00:06.783 ******** 2026-04-16 06:54:42.756343 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756350 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756357 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756363 | orchestrator | 2026-04-16 06:54:42.756370 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-16 06:54:42.756377 | orchestrator | Thursday 16 April 2026 06:54:33 +0000 (0:00:00.465) 0:00:07.249 ******** 2026-04-16 06:54:42.756384 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756407 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:42.756414 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:42.756420 | orchestrator | 2026-04-16 06:54:42.756426 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-16 06:54:42.756434 | orchestrator | Thursday 16 
April 2026 06:54:34 +0000 (0:00:00.289) 0:00:07.539 ******** 2026-04-16 06:54:42.756441 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756449 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:42.756456 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:42.756463 | orchestrator | 2026-04-16 06:54:42.756471 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-16 06:54:42.756478 | orchestrator | Thursday 16 April 2026 06:54:34 +0000 (0:00:00.288) 0:00:07.827 ******** 2026-04-16 06:54:42.756485 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756492 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756500 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756507 | orchestrator | 2026-04-16 06:54:42.756515 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-16 06:54:42.756523 | orchestrator | Thursday 16 April 2026 06:54:34 +0000 (0:00:00.302) 0:00:08.130 ******** 2026-04-16 06:54:42.756531 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756538 | orchestrator | 2026-04-16 06:54:42.756546 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-16 06:54:42.756553 | orchestrator | Thursday 16 April 2026 06:54:35 +0000 (0:00:00.724) 0:00:08.855 ******** 2026-04-16 06:54:42.756566 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756574 | orchestrator | 2026-04-16 06:54:42.756581 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-16 06:54:42.756588 | orchestrator | Thursday 16 April 2026 06:54:35 +0000 (0:00:00.261) 0:00:09.116 ******** 2026-04-16 06:54:42.756608 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756615 | orchestrator | 2026-04-16 06:54:42.756623 | orchestrator | TASK [Flush handlers] ********************************************************** 
2026-04-16 06:54:42.756630 | orchestrator | Thursday 16 April 2026 06:54:35 +0000 (0:00:00.252) 0:00:09.369 ******** 2026-04-16 06:54:42.756637 | orchestrator | 2026-04-16 06:54:42.756645 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:54:42.756652 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.068) 0:00:09.438 ******** 2026-04-16 06:54:42.756659 | orchestrator | 2026-04-16 06:54:42.756667 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:54:42.756674 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.068) 0:00:09.506 ******** 2026-04-16 06:54:42.756682 | orchestrator | 2026-04-16 06:54:42.756689 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-16 06:54:42.756696 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.068) 0:00:09.574 ******** 2026-04-16 06:54:42.756703 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756710 | orchestrator | 2026-04-16 06:54:42.756718 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-16 06:54:42.756725 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.247) 0:00:09.821 ******** 2026-04-16 06:54:42.756732 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756740 | orchestrator | 2026-04-16 06:54:42.756747 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 06:54:42.756754 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.239) 0:00:10.061 ******** 2026-04-16 06:54:42.756762 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756769 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756777 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756783 | orchestrator | 2026-04-16 06:54:42.756790 | orchestrator | TASK [Set _mon_hostname 
fact] ************************************************** 2026-04-16 06:54:42.756797 | orchestrator | Thursday 16 April 2026 06:54:36 +0000 (0:00:00.282) 0:00:10.343 ******** 2026-04-16 06:54:42.756803 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756810 | orchestrator | 2026-04-16 06:54:42.756816 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-16 06:54:42.756822 | orchestrator | Thursday 16 April 2026 06:54:37 +0000 (0:00:00.604) 0:00:10.947 ******** 2026-04-16 06:54:42.756829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 06:54:42.756835 | orchestrator | 2026-04-16 06:54:42.756840 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-16 06:54:42.756846 | orchestrator | Thursday 16 April 2026 06:54:39 +0000 (0:00:01.586) 0:00:12.534 ******** 2026-04-16 06:54:42.756853 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756860 | orchestrator | 2026-04-16 06:54:42.756867 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-16 06:54:42.756874 | orchestrator | Thursday 16 April 2026 06:54:39 +0000 (0:00:00.134) 0:00:12.668 ******** 2026-04-16 06:54:42.756881 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756888 | orchestrator | 2026-04-16 06:54:42.756894 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-16 06:54:42.756901 | orchestrator | Thursday 16 April 2026 06:54:39 +0000 (0:00:00.308) 0:00:12.977 ******** 2026-04-16 06:54:42.756907 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:42.756913 | orchestrator | 2026-04-16 06:54:42.756920 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-16 06:54:42.756926 | orchestrator | Thursday 16 April 2026 06:54:39 +0000 (0:00:00.112) 0:00:13.089 ******** 2026-04-16 06:54:42.756932 
| orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756938 | orchestrator | 2026-04-16 06:54:42.756944 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 06:54:42.756950 | orchestrator | Thursday 16 April 2026 06:54:39 +0000 (0:00:00.126) 0:00:13.216 ******** 2026-04-16 06:54:42.756956 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:42.756969 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:42.756975 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:42.756982 | orchestrator | 2026-04-16 06:54:42.756988 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-16 06:54:42.756994 | orchestrator | Thursday 16 April 2026 06:54:40 +0000 (0:00:00.293) 0:00:13.509 ******** 2026-04-16 06:54:42.757000 | orchestrator | changed: [testbed-node-3] 2026-04-16 06:54:42.757007 | orchestrator | changed: [testbed-node-5] 2026-04-16 06:54:42.757014 | orchestrator | changed: [testbed-node-4] 2026-04-16 06:54:52.769616 | orchestrator | 2026-04-16 06:54:52.769693 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-16 06:54:52.769700 | orchestrator | Thursday 16 April 2026 06:54:42 +0000 (0:00:02.603) 0:00:16.112 ******** 2026-04-16 06:54:52.769704 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769709 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769713 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769717 | orchestrator | 2026-04-16 06:54:52.769721 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-04-16 06:54:52.769725 | orchestrator | Thursday 16 April 2026 06:54:43 +0000 (0:00:00.321) 0:00:16.433 ******** 2026-04-16 06:54:52.769729 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769733 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769736 | orchestrator | ok: [testbed-node-5] 2026-04-16 
06:54:52.769740 | orchestrator | 2026-04-16 06:54:52.769744 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-04-16 06:54:52.769747 | orchestrator | Thursday 16 April 2026 06:54:43 +0000 (0:00:00.501) 0:00:16.935 ******** 2026-04-16 06:54:52.769751 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:52.769756 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:52.769760 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:52.769763 | orchestrator | 2026-04-16 06:54:52.769767 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-04-16 06:54:52.769771 | orchestrator | Thursday 16 April 2026 06:54:43 +0000 (0:00:00.318) 0:00:17.253 ******** 2026-04-16 06:54:52.769775 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769779 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769782 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769786 | orchestrator | 2026-04-16 06:54:52.769790 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-04-16 06:54:52.769794 | orchestrator | Thursday 16 April 2026 06:54:44 +0000 (0:00:00.508) 0:00:17.762 ******** 2026-04-16 06:54:52.769797 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:52.769801 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:52.769819 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:52.769823 | orchestrator | 2026-04-16 06:54:52.769827 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-04-16 06:54:52.769831 | orchestrator | Thursday 16 April 2026 06:54:44 +0000 (0:00:00.297) 0:00:18.060 ******** 2026-04-16 06:54:52.769834 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:52.769838 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:52.769842 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:52.769846 | 
orchestrator | 2026-04-16 06:54:52.769849 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 06:54:52.769853 | orchestrator | Thursday 16 April 2026 06:54:44 +0000 (0:00:00.304) 0:00:18.364 ******** 2026-04-16 06:54:52.769857 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769860 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769864 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769868 | orchestrator | 2026-04-16 06:54:52.769871 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-04-16 06:54:52.769875 | orchestrator | Thursday 16 April 2026 06:54:45 +0000 (0:00:00.500) 0:00:18.865 ******** 2026-04-16 06:54:52.769879 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769882 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769886 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769902 | orchestrator | 2026-04-16 06:54:52.769906 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-04-16 06:54:52.769910 | orchestrator | Thursday 16 April 2026 06:54:46 +0000 (0:00:00.783) 0:00:19.649 ******** 2026-04-16 06:54:52.769914 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769917 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769921 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769925 | orchestrator | 2026-04-16 06:54:52.769928 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-04-16 06:54:52.769932 | orchestrator | Thursday 16 April 2026 06:54:46 +0000 (0:00:00.312) 0:00:19.961 ******** 2026-04-16 06:54:52.769936 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:52.769939 | orchestrator | skipping: [testbed-node-4] 2026-04-16 06:54:52.769943 | orchestrator | skipping: [testbed-node-5] 2026-04-16 06:54:52.769947 | orchestrator | 2026-04-16 06:54:52.769950 | orchestrator 
| TASK [Pass test if no sub test failed] ***************************************** 2026-04-16 06:54:52.769954 | orchestrator | Thursday 16 April 2026 06:54:46 +0000 (0:00:00.293) 0:00:20.255 ******** 2026-04-16 06:54:52.769958 | orchestrator | ok: [testbed-node-3] 2026-04-16 06:54:52.769961 | orchestrator | ok: [testbed-node-4] 2026-04-16 06:54:52.769965 | orchestrator | ok: [testbed-node-5] 2026-04-16 06:54:52.769969 | orchestrator | 2026-04-16 06:54:52.769972 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-04-16 06:54:52.769976 | orchestrator | Thursday 16 April 2026 06:54:47 +0000 (0:00:00.508) 0:00:20.763 ******** 2026-04-16 06:54:52.769980 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 06:54:52.769984 | orchestrator | 2026-04-16 06:54:52.769988 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-04-16 06:54:52.769991 | orchestrator | Thursday 16 April 2026 06:54:47 +0000 (0:00:00.259) 0:00:21.023 ******** 2026-04-16 06:54:52.769995 | orchestrator | skipping: [testbed-node-3] 2026-04-16 06:54:52.769999 | orchestrator | 2026-04-16 06:54:52.770002 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-16 06:54:52.770006 | orchestrator | Thursday 16 April 2026 06:54:47 +0000 (0:00:00.255) 0:00:21.278 ******** 2026-04-16 06:54:52.770010 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 06:54:52.770042 | orchestrator | 2026-04-16 06:54:52.770046 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-16 06:54:52.770050 | orchestrator | Thursday 16 April 2026 06:54:49 +0000 (0:00:01.679) 0:00:22.957 ******** 2026-04-16 06:54:52.770054 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 06:54:52.770057 | orchestrator | 2026-04-16 06:54:52.770061 | orchestrator | TASK 
[Aggregate test results step three] *************************************** 2026-04-16 06:54:52.770065 | orchestrator | Thursday 16 April 2026 06:54:49 +0000 (0:00:00.253) 0:00:23.211 ******** 2026-04-16 06:54:52.770069 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 06:54:52.770073 | orchestrator | 2026-04-16 06:54:52.770086 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:54:52.770090 | orchestrator | Thursday 16 April 2026 06:54:50 +0000 (0:00:00.249) 0:00:23.461 ******** 2026-04-16 06:54:52.770094 | orchestrator | 2026-04-16 06:54:52.770097 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:54:52.770101 | orchestrator | Thursday 16 April 2026 06:54:50 +0000 (0:00:00.071) 0:00:23.532 ******** 2026-04-16 06:54:52.770105 | orchestrator | 2026-04-16 06:54:52.770108 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 06:54:52.770112 | orchestrator | Thursday 16 April 2026 06:54:50 +0000 (0:00:00.071) 0:00:23.604 ******** 2026-04-16 06:54:52.770116 | orchestrator | 2026-04-16 06:54:52.770119 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-04-16 06:54:52.770123 | orchestrator | Thursday 16 April 2026 06:54:50 +0000 (0:00:00.084) 0:00:23.689 ******** 2026-04-16 06:54:52.770127 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-04-16 06:54:52.770135 | orchestrator | 2026-04-16 06:54:52.770139 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-16 06:54:52.770142 | orchestrator | Thursday 16 April 2026 06:54:51 +0000 (0:00:01.525) 0:00:25.215 ******** 2026-04-16 06:54:52.770146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-04-16 06:54:52.770150 | orchestrator |  "msg": [ 2026-04-16 
06:54:52.770154 | orchestrator |  "Validator run completed.", 2026-04-16 06:54:52.770161 | orchestrator |  "You can find the report file here:", 2026-04-16 06:54:52.770165 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-04-16T06:54:27+00:00-report.json", 2026-04-16 06:54:52.770170 | orchestrator |  "on the following host:", 2026-04-16 06:54:52.770174 | orchestrator |  "testbed-manager" 2026-04-16 06:54:52.770178 | orchestrator |  ] 2026-04-16 06:54:52.770182 | orchestrator | } 2026-04-16 06:54:52.770186 | orchestrator | 2026-04-16 06:54:52.770190 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:54:52.770194 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 06:54:52.770200 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-16 06:54:52.770204 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-16 06:54:52.770207 | orchestrator | 2026-04-16 06:54:52.770244 | orchestrator | 2026-04-16 06:54:52.770248 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:54:52.770252 | orchestrator | Thursday 16 April 2026 06:54:52 +0000 (0:00:00.600) 0:00:25.815 ******** 2026-04-16 06:54:52.770255 | orchestrator | =============================================================================== 2026-04-16 06:54:52.770259 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.60s 2026-04-16 06:54:52.770263 | orchestrator | Aggregate test results step one ----------------------------------------- 1.68s 2026-04-16 06:54:52.770267 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2026-04-16 06:54:52.770270 | orchestrator | Write report file 
------------------------------------------------------- 1.53s 2026-04-16 06:54:52.770274 | orchestrator | Get timestamp for report file ------------------------------------------- 0.83s 2026-04-16 06:54:52.770278 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.78s 2026-04-16 06:54:52.770281 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.77s 2026-04-16 06:54:52.770285 | orchestrator | Aggregate test results step one ----------------------------------------- 0.72s 2026-04-16 06:54:52.770289 | orchestrator | Create report output directory ------------------------------------------ 0.71s 2026-04-16 06:54:52.770292 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.60s 2026-04-16 06:54:52.770296 | orchestrator | Print report file information ------------------------------------------- 0.60s 2026-04-16 06:54:52.770300 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s 2026-04-16 06:54:52.770303 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.51s 2026-04-16 06:54:52.770307 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.51s 2026-04-16 06:54:52.770311 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.50s 2026-04-16 06:54:52.770314 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-04-16 06:54:52.770318 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s 2026-04-16 06:54:52.770322 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2026-04-16 06:54:52.770325 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s 2026-04-16 06:54:52.770333 | orchestrator | Calculate OSD devices for each 
host ------------------------------------- 0.33s 2026-04-16 06:54:53.092943 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-04-16 06:54:53.097662 | orchestrator | + set -e 2026-04-16 06:54:53.097736 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 06:54:53.098868 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 06:54:53.098911 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 06:54:53.098921 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 06:54:53.098930 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 06:54:53.098939 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 06:54:53.098948 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 06:54:53.098957 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 06:54:53.098965 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 06:54:53.098974 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 06:54:53.098982 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 06:54:53.098991 | orchestrator | ++ export ARA=false 2026-04-16 06:54:53.099000 | orchestrator | ++ ARA=false 2026-04-16 06:54:53.099009 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 06:54:53.099017 | orchestrator | ++ DEPLOY_MODE=manager 2026-04-16 06:54:53.099026 | orchestrator | ++ export TEMPEST=false 2026-04-16 06:54:53.099034 | orchestrator | ++ TEMPEST=false 2026-04-16 06:54:53.099042 | orchestrator | ++ export IS_ZUUL=true 2026-04-16 06:54:53.099051 | orchestrator | ++ IS_ZUUL=true 2026-04-16 06:54:53.099059 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 06:54:53.099068 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2026-04-16 06:54:53.099076 | orchestrator | ++ export EXTERNAL_API=false 2026-04-16 06:54:53.099084 | orchestrator | ++ EXTERNAL_API=false 2026-04-16 06:54:53.099093 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-04-16 06:54:53.099101 | orchestrator | ++ IMAGE_USER=ubuntu 2026-04-16 
06:54:53.099110 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-04-16 06:54:53.099118 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-04-16 06:54:53.099126 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-04-16 06:54:53.099135 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-04-16 06:54:53.099143 | orchestrator | + source /etc/os-release 2026-04-16 06:54:53.099152 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-04-16 06:54:53.099164 | orchestrator | ++ NAME=Ubuntu 2026-04-16 06:54:53.099178 | orchestrator | ++ VERSION_ID=24.04 2026-04-16 06:54:53.099192 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-04-16 06:54:53.099206 | orchestrator | ++ VERSION_CODENAME=noble 2026-04-16 06:54:53.099293 | orchestrator | ++ ID=ubuntu 2026-04-16 06:54:53.099309 | orchestrator | ++ ID_LIKE=debian 2026-04-16 06:54:53.099323 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-04-16 06:54:53.099337 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-04-16 06:54:53.099351 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-04-16 06:54:53.099366 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-04-16 06:54:53.099382 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-04-16 06:54:53.099396 | orchestrator | ++ LOGO=ubuntu-logo 2026-04-16 06:54:53.099411 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-04-16 06:54:53.099439 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-04-16 06:54:53.099450 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-16 06:54:53.132145 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-04-16 06:55:14.424013 | orchestrator | 2026-04-16 06:55:14.424091 | orchestrator | # Status of Elasticsearch 
2026-04-16 06:55:14.424097 | orchestrator | 2026-04-16 06:55:14.424102 | orchestrator | + pushd /opt/configuration/contrib 2026-04-16 06:55:14.424108 | orchestrator | + echo 2026-04-16 06:55:14.424113 | orchestrator | + echo '# Status of Elasticsearch' 2026-04-16 06:55:14.424117 | orchestrator | + echo 2026-04-16 06:55:14.424122 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-04-16 06:55:14.629355 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-04-16 06:55:14.629544 | orchestrator | 2026-04-16 06:55:14.629559 | orchestrator | # Status of MariaDB 2026-04-16 06:55:14.629591 | orchestrator | 2026-04-16 06:55:14.629597 | orchestrator | + echo 2026-04-16 06:55:14.629604 | orchestrator | + echo '# Status of MariaDB' 2026-04-16 06:55:14.629610 | orchestrator | + echo 2026-04-16 06:55:14.629877 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-16 06:55:14.674213 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-16 06:55:14.674362 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-16 06:55:14.674378 | orchestrator | + MARIADB_USER=root_shard_0 2026-04-16 06:55:14.674391 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-04-16 06:55:14.736708 | orchestrator | Reading package lists... 2026-04-16 06:55:15.089974 | orchestrator | Building dependency tree... 2026-04-16 06:55:15.091389 | orchestrator | Reading state information... 2026-04-16 06:55:15.419019 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-04-16 06:55:15.419113 | orchestrator | bc set to manually installed. 
2026-04-16 06:55:15.419128 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 2026-04-16 06:55:16.012391 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-04-16 06:55:16.014364 | orchestrator | 2026-04-16 06:55:16.014414 | orchestrator | # Status of Prometheus 2026-04-16 06:55:16.014426 | orchestrator | 2026-04-16 06:55:16.014436 | orchestrator | + echo 2026-04-16 06:55:16.014446 | orchestrator | + echo '# Status of Prometheus' 2026-04-16 06:55:16.014456 | orchestrator | + echo 2026-04-16 06:55:16.014466 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-04-16 06:55:16.073399 | orchestrator | Unauthorized 2026-04-16 06:55:16.076405 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-04-16 06:55:16.150278 | orchestrator | Unauthorized 2026-04-16 06:55:16.154248 | orchestrator | 2026-04-16 06:55:16.154395 | orchestrator | # Status of RabbitMQ 2026-04-16 06:55:16.154409 | orchestrator | 2026-04-16 06:55:16.154417 | orchestrator | + echo 2026-04-16 06:55:16.154424 | orchestrator | + echo '# Status of RabbitMQ' 2026-04-16 06:55:16.154431 | orchestrator | + echo 2026-04-16 06:55:16.154446 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-04-16 06:55:16.213581 | orchestrator | + [[ -1 -ge 0 ]] 2026-04-16 06:55:16.213678 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-16 06:55:16.213701 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-04-16 06:55:16.605934 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2026-04-16 06:55:16.614631 | orchestrator | 2026-04-16 06:55:16.614720 | orchestrator | # Status of Redis 2026-04-16 06:55:16.614735 | orchestrator | 2026-04-16 06:55:16.614746 | orchestrator | + echo 2026-04-16 06:55:16.614766 | orchestrator | + echo '# Status of Redis' 2026-04-16 06:55:16.614786 | orchestrator | + echo 
2026-04-16 06:55:16.614829 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-04-16 06:55:16.621367 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001996s;;;0.000000;10.000000 2026-04-16 06:55:16.621755 | orchestrator | + popd 2026-04-16 06:55:16.622112 | orchestrator | 2026-04-16 06:55:16.622156 | orchestrator | # Create backup of MariaDB database 2026-04-16 06:55:16.622169 | orchestrator | 2026-04-16 06:55:16.622180 | orchestrator | + echo 2026-04-16 06:55:16.622191 | orchestrator | + echo '# Create backup of MariaDB database' 2026-04-16 06:55:16.622202 | orchestrator | + echo 2026-04-16 06:55:16.622214 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-04-16 06:55:18.692716 | orchestrator | 2026-04-16 06:55:18 | INFO  | Task 85f2eeb2-d9dd-49c2-9216-4482aa56b2e6 (mariadb_backup) was prepared for execution. 2026-04-16 06:55:18.692791 | orchestrator | 2026-04-16 06:55:18 | INFO  | It takes a moment until task 85f2eeb2-d9dd-49c2-9216-4482aa56b2e6 (mariadb_backup) has been started and output is visible here. 
2026-04-16 06:55:49.576965 | orchestrator | 2026-04-16 06:55:49.577105 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 06:55:49.577160 | orchestrator | 2026-04-16 06:55:49.577173 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 06:55:49.577184 | orchestrator | Thursday 16 April 2026 06:55:22 +0000 (0:00:00.171) 0:00:00.171 ******** 2026-04-16 06:55:49.577194 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:55:49.577205 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:55:49.577215 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:55:49.577278 | orchestrator | 2026-04-16 06:55:49.577289 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 06:55:49.577299 | orchestrator | Thursday 16 April 2026 06:55:23 +0000 (0:00:00.353) 0:00:00.524 ******** 2026-04-16 06:55:49.577309 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-04-16 06:55:49.577319 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-04-16 06:55:49.577329 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-04-16 06:55:49.577338 | orchestrator | 2026-04-16 06:55:49.577348 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-04-16 06:55:49.577357 | orchestrator | 2026-04-16 06:55:49.577367 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-04-16 06:55:49.577376 | orchestrator | Thursday 16 April 2026 06:55:23 +0000 (0:00:00.587) 0:00:01.112 ******** 2026-04-16 06:55:49.577386 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 06:55:49.577396 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-16 06:55:49.577406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-16 06:55:49.577415 | orchestrator | 
2026-04-16 06:55:49.577425 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-16 06:55:49.577434 | orchestrator | Thursday 16 April 2026 06:55:24 +0000 (0:00:00.415) 0:00:01.528 ******** 2026-04-16 06:55:49.577462 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 06:55:49.577473 | orchestrator | 2026-04-16 06:55:49.577485 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-04-16 06:55:49.577498 | orchestrator | Thursday 16 April 2026 06:55:24 +0000 (0:00:00.536) 0:00:02.064 ******** 2026-04-16 06:55:49.577509 | orchestrator | ok: [testbed-node-0] 2026-04-16 06:55:49.577520 | orchestrator | ok: [testbed-node-1] 2026-04-16 06:55:49.577531 | orchestrator | ok: [testbed-node-2] 2026-04-16 06:55:49.577542 | orchestrator | 2026-04-16 06:55:49.577553 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-04-16 06:55:49.577564 | orchestrator | Thursday 16 April 2026 06:55:27 +0000 (0:00:03.092) 0:00:05.156 ******** 2026-04-16 06:55:49.577575 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-04-16 06:55:49.577585 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-04-16 06:55:49.577595 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-04-16 06:55:49.577605 | orchestrator | mariadb_bootstrap_restart 2026-04-16 06:55:49.577615 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:55:49.577624 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:55:49.577634 | orchestrator | changed: [testbed-node-0] 2026-04-16 06:55:49.577643 | orchestrator | 2026-04-16 06:55:49.577653 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-04-16 06:55:49.577663 | orchestrator | 
skipping: no hosts matched 2026-04-16 06:55:49.577672 | orchestrator | 2026-04-16 06:55:49.577682 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-04-16 06:55:49.577691 | orchestrator | skipping: no hosts matched 2026-04-16 06:55:49.577701 | orchestrator | 2026-04-16 06:55:49.577710 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-04-16 06:55:49.577719 | orchestrator | skipping: no hosts matched 2026-04-16 06:55:49.577729 | orchestrator | 2026-04-16 06:55:49.577738 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-04-16 06:55:49.577748 | orchestrator | 2026-04-16 06:55:49.577757 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-04-16 06:55:49.577767 | orchestrator | Thursday 16 April 2026 06:55:48 +0000 (0:00:20.818) 0:00:25.975 ******** 2026-04-16 06:55:49.577776 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:55:49.577786 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:55:49.577795 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:55:49.577805 | orchestrator | 2026-04-16 06:55:49.577821 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-04-16 06:55:49.577831 | orchestrator | Thursday 16 April 2026 06:55:48 +0000 (0:00:00.296) 0:00:26.271 ******** 2026-04-16 06:55:49.577840 | orchestrator | skipping: [testbed-node-0] 2026-04-16 06:55:49.577849 | orchestrator | skipping: [testbed-node-1] 2026-04-16 06:55:49.577859 | orchestrator | skipping: [testbed-node-2] 2026-04-16 06:55:49.577868 | orchestrator | 2026-04-16 06:55:49.577878 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 06:55:49.577889 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 
06:55:49.577900 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 06:55:49.577910 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 06:55:49.577920 | orchestrator | 2026-04-16 06:55:49.577929 | orchestrator | 2026-04-16 06:55:49.577939 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 06:55:49.577949 | orchestrator | Thursday 16 April 2026 06:55:49 +0000 (0:00:00.376) 0:00:26.648 ******** 2026-04-16 06:55:49.577958 | orchestrator | =============================================================================== 2026-04-16 06:55:49.577968 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 20.82s 2026-04-16 06:55:49.577997 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.09s 2026-04-16 06:55:49.578008 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-04-16 06:55:49.578075 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2026-04-16 06:55:49.578088 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s 2026-04-16 06:55:49.578098 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.38s 2026-04-16 06:55:49.578108 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-04-16 06:55:49.578118 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2026-04-16 06:55:49.882220 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-04-16 06:55:49.892030 | orchestrator | + set -e 2026-04-16 06:55:49.892147 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 06:55:49.892580 | orchestrator | ++ export 
INTERACTIVE=false 2026-04-16 06:55:49.892610 | orchestrator | ++ INTERACTIVE=false 2026-04-16 06:55:49.892621 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 06:55:49.892632 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 06:55:49.892786 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-16 06:55:49.894503 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-16 06:55:49.902497 | orchestrator | 2026-04-16 06:55:49.902595 | orchestrator | # OpenStack endpoints 2026-04-16 06:55:49.902609 | orchestrator | 2026-04-16 06:55:49.902620 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 06:55:49.902631 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 06:55:49.902640 | orchestrator | + export OS_CLOUD=admin 2026-04-16 06:55:49.902650 | orchestrator | + OS_CLOUD=admin 2026-04-16 06:55:49.902659 | orchestrator | + echo 2026-04-16 06:55:49.902669 | orchestrator | + echo '# OpenStack endpoints' 2026-04-16 06:55:49.902679 | orchestrator | + echo 2026-04-16 06:55:49.902688 | orchestrator | + openstack endpoint list 2026-04-16 06:55:53.051306 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-16 06:55:53.051419 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-04-16 06:55:53.051434 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-16 06:55:53.051469 | orchestrator | | 0500898b72494e67a66527e83d465096 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-04-16 06:55:53.051480 | orchestrator | | 069306ea675441f789f76d67eb2156ec | RegionOne | swift | object-store | True | internal 
| https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-16 06:55:53.051491 | orchestrator | | 06b6f8d54b0f440e95330358184df545 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-04-16 06:55:53.051502 | orchestrator | | 0ad37a9ae5b146589bfb298dd2604f63 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-04-16 06:55:53.051513 | orchestrator | | 0ad7bb9c5cd14714b50de15febf4e89a | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-04-16 06:55:53.051523 | orchestrator | | 0db8f7b3aaff4c0299025e8ad000e199 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-04-16 06:55:53.051535 | orchestrator | | 184993b154bf4afaab303323783c2473 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-04-16 06:55:53.051546 | orchestrator | | 2074199def2442bd8b59082bca6dcb42 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-04-16 06:55:53.051556 | orchestrator | | 5b33b192d06d4d4d83f474af33e226e3 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-04-16 06:55:53.051567 | orchestrator | | 680229dfa0c646129ec105f6c0a5ab93 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-04-16 06:55:53.051596 | orchestrator | | 6bff20ec606d42b29783c544cba87411 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-04-16 06:55:53.051607 | orchestrator | | 8b25d74e20354c53876924282c47c3fb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-04-16 06:55:53.051618 | orchestrator | | 8db78846009e487595dbd676fd1491dc | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-04-16 
06:55:53.051628 | orchestrator | | 8e6509a9730c4f6f9466e439e43a06d3 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-04-16 06:55:53.051639 | orchestrator | | 925bc22b4a354eee870c2b33096a30f4 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-16 06:55:53.051649 | orchestrator | | 9da1f316b8e449ffb4e585f47af769c6 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-16 06:55:53.051660 | orchestrator | | 9ecc93099a314468be4fe875c86fff2b | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-04-16 06:55:53.051670 | orchestrator | | a2416aa55be84af3b9ae50b9db7754d0 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-04-16 06:55:53.051681 | orchestrator | | a2acda1f26584568afabe0d9f2e7881f | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-04-16 06:55:53.051692 | orchestrator | | a5d8f1232cb74105ac18c8819fb2d9fb | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-04-16 06:55:53.051729 | orchestrator | | a839c31795a94e2392667e82fe207479 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-04-16 06:55:53.051742 | orchestrator | | b6c7afe174fc4a6bba7907bf72a85ae6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-04-16 06:55:53.051758 | orchestrator | | b9ad8886dca44a258d94e1fd863fd072 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-04-16 06:55:53.051770 | orchestrator | | c6a57eb8e89146a3af49f300a8b56ef0 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-04-16 06:55:53.051780 | orchestrator | | c6def6e9228f4414b68b144c416ac6a2 | RegionOne | 
neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-04-16 06:55:53.051791 | orchestrator | | cbab18a249704eb99fcca0c34d4e7415 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-04-16 06:55:53.051804 | orchestrator | | d5ef7b864e234a7e88c9937bc419e2ee | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-04-16 06:55:53.051816 | orchestrator | | ddfc02f6910f4d718509571043b2decb | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-04-16 06:55:53.051828 | orchestrator | | e85e073e0e23418f9ee74af3f3730d51 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-04-16 06:55:53.051841 | orchestrator | | fde1fd9406dc40bdbf82693b39380388 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-04-16 06:55:53.051853 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-04-16 06:55:53.318902 | orchestrator | 2026-04-16 06:55:53.319006 | orchestrator | # Cinder 2026-04-16 06:55:53.319022 | orchestrator | 2026-04-16 06:55:53.319033 | orchestrator | + echo 2026-04-16 06:55:53.319045 | orchestrator | + echo '# Cinder' 2026-04-16 06:55:53.319057 | orchestrator | + echo 2026-04-16 06:55:53.319067 | orchestrator | + openstack volume service list 2026-04-16 06:55:55.872430 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-16 06:55:55.872524 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-04-16 06:55:55.872536 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-04-16 06:55:55.872546 | 
orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-16T06:55:48.000000 |
2026-04-16 06:55:55.872555 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-16T06:55:48.000000 |
2026-04-16 06:55:55.872564 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-16T06:55:47.000000 |
2026-04-16 06:55:55.872572 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-16T06:55:47.000000 |
2026-04-16 06:55:55.872581 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-16T06:55:53.000000 |
2026-04-16 06:55:55.872589 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-16T06:55:53.000000 |
2026-04-16 06:55:55.872598 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-16T06:55:50.000000 |
2026-04-16 06:55:55.872606 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-16T06:55:52.000000 |
2026-04-16 06:55:55.872615 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-16T06:55:53.000000 |
2026-04-16 06:55:55.872646 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-16 06:55:56.122672 | orchestrator |
2026-04-16 06:55:56.122767 | orchestrator | # Neutron
2026-04-16 06:55:56.122781 | orchestrator |
2026-04-16 06:55:56.122792 | orchestrator | + echo
2026-04-16 06:55:56.122803 | orchestrator | + echo '# Neutron'
2026-04-16 06:55:56.122813 | orchestrator | + echo
2026-04-16 06:55:56.122823 | orchestrator | + openstack network agent list
2026-04-16 06:55:58.653392 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-16 06:55:58.653476 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-04-16 06:55:58.653482 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-16 06:55:58.653486 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653490 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653494 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653511 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653515 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653546 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-04-16 06:55:58.653551 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-16 06:55:58.653555 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-16 06:55:58.653559 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-04-16 06:55:58.653563 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-04-16 06:55:58.918221 | orchestrator | + openstack network service provider list
2026-04-16 06:56:01.393455 | orchestrator | +---------------+------+---------+
2026-04-16 06:56:01.393556 | orchestrator | | Service Type | Name | Default |
2026-04-16 06:56:01.393570 | orchestrator | +---------------+------+---------+
2026-04-16 06:56:01.393581 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-16 06:56:01.393592 | orchestrator | +---------------+------+---------+
2026-04-16 06:56:01.649490 | orchestrator |
2026-04-16 06:56:01.649603 | orchestrator | # Nova
2026-04-16 06:56:01.649619 | orchestrator |
2026-04-16 06:56:01.649630 | orchestrator | + echo
2026-04-16 06:56:01.649641 | orchestrator | + echo '# Nova'
2026-04-16 06:56:01.649652 | orchestrator | + echo
2026-04-16 06:56:01.649664 | orchestrator | + openstack compute service list
2026-04-16 06:56:04.238003 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 06:56:04.238187 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-16 06:56:04.238207 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 06:56:04.238221 | orchestrator | | ff8b8457-122f-4eac-a7b0-3c42b7a5c514 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-16T06:55:57.000000 |
2026-04-16 06:56:04.239127 | orchestrator | | 6d603e4e-11e7-4f03-b5c1-46c85270c3ab | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-16T06:56:02.000000 |
2026-04-16 06:56:04.239182 | orchestrator | | c0e02b1f-ee70-4e3e-aaf6-7815f95e1c5c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-16T06:56:02.000000 |
2026-04-16 06:56:04.239195 | orchestrator | | 81d64790-9d7b-46cc-bd1f-bea049f89bba | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-16T06:56:03.000000 |
2026-04-16 06:56:04.239205 | orchestrator | | e558ce6e-5d14-4afe-bcd7-2cc8ab725d20 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-16T06:55:54.000000 |
2026-04-16 06:56:04.239214 | orchestrator | | 2bafa520-39e5-4883-986a-1fb1473a54ce | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-16T06:55:54.000000 |
2026-04-16 06:56:04.239222 | orchestrator | | 0e0a26f6-ed62-4403-b6da-b173c985f82a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-16T06:56:01.000000 |
2026-04-16 06:56:04.239231 | orchestrator | | 513571f2-7530-4be7-bc3e-66968a866aa6 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-16T06:56:01.000000 |
2026-04-16 06:56:04.239239 | orchestrator | | 3dfd95b4-aed1-4606-a552-2f306241baee | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-16T06:56:02.000000 |
2026-04-16 06:56:04.239270 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 06:56:04.486760 | orchestrator | + openstack hypervisor list
2026-04-16 06:56:07.572671 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 06:56:07.572759 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-16 06:56:07.572769 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 06:56:07.572777 | orchestrator | | 143cc446-be71-4704-abe3-ced7dfdfbbd7 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-16 06:56:07.572784 | orchestrator | | 628f7dce-e1f5-421c-9a4d-3edb027a67e0 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-16 06:56:07.572791 | orchestrator | | 4eb030e4-49b3-4ef5-99c5-eadbebccaf96 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-16 06:56:07.572799 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 06:56:07.799986 | orchestrator |
2026-04-16 06:56:07.800116 | orchestrator | # Run OpenStack test play
2026-04-16 06:56:07.800132 | orchestrator |
2026-04-16
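The service and hypervisor tables above are eyeballed in this job, but the same check can be done mechanically. A minimal sketch, assuming the JSON emitted by `openstack compute service list -f json` keeps the column titles shown in the table above (`Binary`, `Host`, `Status`, `State`) as field names; the sample data below is illustrative, not taken from a live cloud:

```python
import json

def down_services(service_list_json: str) -> list[str]:
    """Return 'binary@host' for every enabled service that is not 'up'.

    Expects the JSON produced by `openstack compute service list -f json`
    (field names assumed to match the table column titles).
    """
    services = json.loads(service_list_json)
    return [
        f"{s['Binary']}@{s['Host']}"
        for s in services
        if s.get("Status") == "enabled" and s.get("State") != "up"
    ]

# Illustrative sample shaped like the nova-compute rows in the log.
sample = json.dumps([
    {"Binary": "nova-compute", "Host": "testbed-node-3", "Status": "enabled", "State": "up"},
    {"Binary": "nova-compute", "Host": "testbed-node-4", "Status": "enabled", "State": "down"},
])
print(down_services(sample))  # ['nova-compute@testbed-node-4']
```

An empty result means every enabled service reported in, which is the condition the tables above are meant to demonstrate.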
06:56:07.800144 | orchestrator | + echo
2026-04-16 06:56:07.800155 | orchestrator | + echo '# Run OpenStack test play'
2026-04-16 06:56:07.800171 | orchestrator | + echo
2026-04-16 06:56:07.800183 | orchestrator | + osism apply --environment openstack test
2026-04-16 06:56:09.612783 | orchestrator | 2026-04-16 06:56:09 | INFO  | Trying to run play test in environment openstack
2026-04-16 06:56:09.659529 | orchestrator | 2026-04-16 06:56:09 | INFO  | Task 691d3ce0-ed8c-452c-8ec2-d87c267d9d1f (test) was prepared for execution.
2026-04-16 06:56:09.659625 | orchestrator | 2026-04-16 06:56:09 | INFO  | It takes a moment until task 691d3ce0-ed8c-452c-8ec2-d87c267d9d1f (test) has been started and output is visible here.
2026-04-16 06:59:18.123892 | orchestrator |
2026-04-16 06:59:18.124009 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-16 06:59:18.124021 | orchestrator |
2026-04-16 06:59:18.124028 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-16 06:59:18.124036 | orchestrator | Thursday 16 April 2026 06:56:13 +0000 (0:00:00.062) 0:00:00.062 ********
2026-04-16 06:59:18.124043 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124051 | orchestrator |
2026-04-16 06:59:18.124058 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-16 06:59:18.124065 | orchestrator | Thursday 16 April 2026 06:56:16 +0000 (0:00:03.211) 0:00:03.274 ********
2026-04-16 06:59:18.124071 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124099 | orchestrator |
2026-04-16 06:59:18.124107 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-16 06:59:18.124113 | orchestrator | Thursday 16 April 2026 06:56:20 +0000 (0:00:06.161) 0:00:07.181 ********
2026-04-16 06:59:18.124120 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124127 | orchestrator |
2026-04-16 06:59:18.124133 | orchestrator | TASK [Create test project] *****************************************************
2026-04-16 06:59:18.124140 | orchestrator | Thursday 16 April 2026 06:56:26 +0000 (0:00:03.749) 0:00:13.342 ********
2026-04-16 06:59:18.124147 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124153 | orchestrator |
2026-04-16 06:59:18.124160 | orchestrator | TASK [Create test user] ********************************************************
2026-04-16 06:59:18.124167 | orchestrator | Thursday 16 April 2026 06:56:30 +0000 (0:00:03.973) 0:00:17.092 ********
2026-04-16 06:59:18.124173 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124180 | orchestrator |
2026-04-16 06:59:18.124187 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-16 06:59:18.124193 | orchestrator | Thursday 16 April 2026 06:56:34 +0000 (0:00:11.030) 0:00:21.066 ********
2026-04-16 06:59:18.124200 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-16 06:59:18.124207 | orchestrator | changed: [localhost] => (item=member)
2026-04-16 06:59:18.124215 | orchestrator | changed: [localhost] => (item=creator)
2026-04-16 06:59:18.124222 | orchestrator |
2026-04-16 06:59:18.124228 | orchestrator | TASK [Create test server group] ************************************************
2026-04-16 06:59:18.124235 | orchestrator | Thursday 16 April 2026 06:56:45 +0000 (0:00:11.030) 0:00:32.097 ********
2026-04-16 06:59:18.124242 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124248 | orchestrator |
2026-04-16 06:59:18.124255 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-16 06:59:18.124262 | orchestrator | Thursday 16 April 2026 06:56:49 +0000 (0:00:04.118) 0:00:36.215 ********
2026-04-16 06:59:18.124268 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124275 | orchestrator |
2026-04-16 06:59:18.124281 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-16 06:59:18.124288 | orchestrator | Thursday 16 April 2026 06:56:54 +0000 (0:00:04.678) 0:00:40.894 ********
2026-04-16 06:59:18.124294 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124301 | orchestrator |
2026-04-16 06:59:18.124308 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-16 06:59:18.124314 | orchestrator | Thursday 16 April 2026 06:56:58 +0000 (0:00:04.163) 0:00:45.057 ********
2026-04-16 06:59:18.124355 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124363 | orchestrator |
2026-04-16 06:59:18.124369 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-16 06:59:18.124376 | orchestrator | Thursday 16 April 2026 06:57:02 +0000 (0:00:03.842) 0:00:48.900 ********
2026-04-16 06:59:18.124383 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124389 | orchestrator |
2026-04-16 06:59:18.124396 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-16 06:59:18.124402 | orchestrator | Thursday 16 April 2026 06:57:06 +0000 (0:00:03.941) 0:00:52.842 ********
2026-04-16 06:59:18.124409 | orchestrator | changed: [localhost]
2026-04-16 06:59:18.124415 | orchestrator |
2026-04-16 06:59:18.124423 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-16 06:59:18.124430 | orchestrator | Thursday 16 April 2026 06:57:09 +0000 (0:00:03.665) 0:00:56.507 ********
2026-04-16 06:59:18.124438 | orchestrator | changed: [localhost] => (item={'name': 'test-1'})
2026-04-16 06:59:18.124446 | orchestrator | changed: [localhost] => (item={'name': 'test-2'})
2026-04-16 06:59:18.124453 | orchestrator | changed: [localhost] => (item={'name': 'test-3'})
2026-04-16 06:59:18.124461 | orchestrator |
2026-04-16 06:59:18.124469 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-16 06:59:18.124477 | orchestrator | Thursday 16 April 2026 06:57:22 +0000 (0:00:12.987) 0:01:09.495 ********
2026-04-16 06:59:18.124491 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-16 06:59:18.124499 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-16 06:59:18.124507 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-16 06:59:18.124514 | orchestrator |
2026-04-16 06:59:18.124522 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-16 06:59:18.124529 | orchestrator | Thursday 16 April 2026 06:57:38 +0000 (0:00:15.874) 0:01:25.370 ********
2026-04-16 06:59:18.124537 | orchestrator | changed: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-16 06:59:18.124545 | orchestrator | changed: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-16 06:59:18.124565 | orchestrator | changed: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-16 06:59:18.124572 | orchestrator |
2026-04-16 06:59:18.124580 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-16 06:59:18.124587 | orchestrator |
2026-04-16 06:59:18.124595 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-16 06:59:18.124616 | orchestrator | Thursday 16 April 2026 06:58:08 +0000 (0:00:29.921) 0:01:55.291 ********
2026-04-16 06:59:18.124624 | orchestrator | ok: [localhost]
2026-04-16 06:59:18.124632 | orchestrator |
2026-04-16 06:59:18.124639 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-16 06:59:18.124647 | orchestrator | Thursday 16 April 2026 06:58:11 +0000 (0:00:03.436) 0:01:58.727 ********
2026-04-16 06:59:18.124654 | orchestrator | skipping: [localhost]
2026-04-16 06:59:18.124662 | orchestrator |
2026-04-16 06:59:18.124669 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-16 06:59:18.124677 | orchestrator | Thursday 16 April 2026 06:58:11 +0000 (0:00:00.063) 0:01:58.791 ********
2026-04-16 06:59:18.124684 | orchestrator | skipping: [localhost]
2026-04-16 06:59:18.124692 | orchestrator |
2026-04-16 06:59:18.124700 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-16 06:59:18.124707 | orchestrator | Thursday 16 April 2026 06:58:12 +0000 (0:00:00.046) 0:01:58.838 ********
2026-04-16 06:59:18.124715 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 06:59:18.124723 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 06:59:18.124730 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 06:59:18.124738 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 06:59:18.124745 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 06:59:18.124753 | orchestrator | skipping: [localhost]
2026-04-16 06:59:18.124761 | orchestrator |
2026-04-16 06:59:18.124768 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-16 06:59:18.124775 | orchestrator | Thursday 16 April 2026 06:58:12 +0000 (0:00:00.175) 0:01:59.014 ********
2026-04-16 06:59:18.124783 | orchestrator | skipping: [localhost]
2026-04-16 06:59:18.124791 | orchestrator |
2026-04-16 06:59:18.124798 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-16 06:59:18.124805 | orchestrator | Thursday 16 April 2026 06:58:12 +0000 (0:00:00.147) 0:01:59.161 ********
2026-04-16 06:59:18.124811 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 06:59:18.124818 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 06:59:18.124824 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 06:59:18.124831 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 06:59:18.124837 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 06:59:18.124849 | orchestrator |
2026-04-16 06:59:18.124855 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-16 06:59:18.124862 | orchestrator | Thursday 16 April 2026 06:58:16 +0000 (0:00:04.339) 0:02:03.501 ********
2026-04-16 06:59:18.124868 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-16 06:59:18.124876 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-04-16 06:59:18.124883 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-04-16 06:59:18.124889 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-04-16 06:59:18.124898 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j753877781823.3643', 'results_file': '/ansible/.ansible_async/j753877781823.3643', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 06:59:18.124908 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j980479378887.3668', 'results_file': '/ansible/.ansible_async/j980479378887.3668', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 06:59:18.124914 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-16 06:59:18.124921 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j90430024112.3693', 'results_file': '/ansible/.ansible_async/j90430024112.3693', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 06:59:18.124928 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j605255544137.3718', 'results_file': '/ansible/.ansible_async/j605255544137.3718', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 06:59:18.124935 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j103918543740.3743', 'results_file': '/ansible/.ansible_async/j103918543740.3743', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 06:59:18.124942 | orchestrator |
2026-04-16 06:59:18.124949 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-16 06:59:18.124956 | orchestrator | Thursday 16 April 2026 06:59:13 +0000 (0:00:56.928) 0:03:00.430 ********
2026-04-16
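The "FAILED - RETRYING" records above come from Ansible's `until`/`retries` polling of async jobs and are expected noise while instances boot; only an exhausted retry counter indicates a real failure. A small sketch of how such records can be counted per task when post-processing a job log like this one (the regex matches the exact line shape shown above):

```python
import re

# Matches Ansible retry records such as:
# FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
RETRY = re.compile(
    r"FAILED - RETRYING: \[localhost\]: (?P<task>.+?) \((?P<left>\d+) retries left\)"
)

def retries_used(log: str) -> dict[str, int]:
    """Count how many retry attempts each waiting task burned."""
    counts: dict[str, int] = {}
    for m in RETRY.finditer(log):
        counts[m.group("task")] = counts.get(m.group("task"), 0) + 1
    return counts

log = (
    "FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).\n"
    "FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).\n"
    "FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).\n"
)
print(retries_used(log))
# {'Wait for instance creation to complete': 2, 'Wait for metadata to be added': 1}
```

A task whose count approaches its configured retry budget is worth investigating even when the play eventually succeeds.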
06:59:18.124966 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 07:00:23.074673 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 07:00:23.074784 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 07:00:23.074798 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 07:00:23.074808 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 07:00:23.074819 | orchestrator |
2026-04-16 07:00:23.074830 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-16 07:00:23.074840 | orchestrator | Thursday 16 April 2026 06:59:18 +0000 (0:00:04.474) 0:03:04.904 ********
2026-04-16 07:00:23.074850 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-16 07:00:23.074862 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j250294852201.3854', 'results_file': '/ansible/.ansible_async/j250294852201.3854', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.074875 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j378613088634.3879', 'results_file': '/ansible/.ansible_async/j378613088634.3879', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.074906 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j904183382095.3904', 'results_file': '/ansible/.ansible_async/j904183382095.3904', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.074917 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j266792259560.3929', 'results_file': '/ansible/.ansible_async/j266792259560.3929', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.074943 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j618191576346.3954', 'results_file': '/ansible/.ansible_async/j618191576346.3954', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.074954 | orchestrator |
2026-04-16 07:00:23.074964 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-16 07:00:23.074974 | orchestrator | Thursday 16 April 2026 06:59:27 +0000 (0:00:08.937) 0:03:13.841 ********
2026-04-16 07:00:23.074983 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 07:00:23.074993 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 07:00:23.075002 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 07:00:23.075012 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 07:00:23.075022 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 07:00:23.075032 | orchestrator |
2026-04-16 07:00:23.075042 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-16 07:00:23.075051 | orchestrator | Thursday 16 April 2026 06:59:31 +0000 (0:00:04.720) 0:03:18.562 ********
2026-04-16 07:00:23.075061 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-16 07:00:23.075071 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j818655765868.4029', 'results_file': '/ansible/.ansible_async/j818655765868.4029', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.075081 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j908828502331.4054', 'results_file': '/ansible/.ansible_async/j908828502331.4054', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.075092 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j920843974366.4080', 'results_file': '/ansible/.ansible_async/j920843974366.4080', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.075106 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j894644693263.4106', 'results_file': '/ansible/.ansible_async/j894644693263.4106', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.075133 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j843745297680.4132', 'results_file': '/ansible/.ansible_async/j843745297680.4132', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 07:00:23.075143 | orchestrator |
2026-04-16 07:00:23.075153 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-16 07:00:23.075163 | orchestrator | Thursday 16 April 2026 06:59:40 +0000 (0:00:08.989) 0:03:27.552 ********
2026-04-16 07:00:23.075173 | orchestrator | changed: [localhost]
2026-04-16 07:00:23.075191 | orchestrator |
2026-04-16 07:00:23.075201 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-16 07:00:23.075210 | orchestrator | Thursday 16 April 2026 06:59:46 +0000 (0:00:05.481) 0:03:33.033 ********
2026-04-16 07:00:23.075220 | orchestrator | changed: [localhost]
2026-04-16 07:00:23.075230 | orchestrator |
2026-04-16 07:00:23.075239 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-16 07:00:23.075249 | orchestrator | Thursday 16 April 2026 06:59:59 +0000 (0:00:13.101) 0:03:46.135 ********
2026-04-16 07:00:23.075259 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 07:00:23.075269 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 07:00:23.075279 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 07:00:23.075289 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 07:00:23.075298 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 07:00:23.075308 | orchestrator |
2026-04-16 07:00:23.075317 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-16 07:00:23.075327 | orchestrator | Thursday 16 April 2026 07:00:22 +0000 (0:00:23.366) 0:04:09.502 ********
2026-04-16 07:00:23.075336 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-16 07:00:23.075383 | orchestrator |  "msg": "test: 192.168.112.178"
2026-04-16 07:00:23.075401 | orchestrator | }
2026-04-16 07:00:23.075419 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-16 07:00:23.075436 | orchestrator |  "msg": "test-1: 192.168.112.118"
2026-04-16 07:00:23.075454 | orchestrator | }
2026-04-16 07:00:23.075470 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-16 07:00:23.075488 | orchestrator |  "msg": "test-2: 192.168.112.158"
2026-04-16 07:00:23.075502 | orchestrator | }
2026-04-16 07:00:23.075512 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-16 07:00:23.075521 | orchestrator |  "msg": "test-3: 192.168.112.131"
2026-04-16 07:00:23.075531 | orchestrator | }
2026-04-16 07:00:23.075540 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-16 07:00:23.075550 | orchestrator |  "msg": "test-4: 192.168.112.133"
2026-04-16 07:00:23.075560 | orchestrator | }
2026-04-16 07:00:23.075569 | orchestrator |
2026-04-16 07:00:23.075579 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:00:23.075589 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 07:00:23.075600 | orchestrator |
2026-04-16 07:00:23.075609 | orchestrator |
2026-04-16 07:00:23.075619 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:00:23.075629 | orchestrator | Thursday 16 April 2026 07:00:22 +0000 (0:00:00.125) 0:04:09.627 ********
2026-04-16 07:00:23.075639 | orchestrator | ===============================================================================
2026-04-16 07:00:23.075648 | orchestrator | Wait for instance creation to complete --------------------------------- 56.93s
2026-04-16 07:00:23.075658 | orchestrator | Create test routers ---------------------------------------------------- 29.92s
2026-04-16 07:00:23.075667 | orchestrator | Create floating ip addresses ------------------------------------------- 23.37s
2026-04-16 07:00:23.075677 | orchestrator | Create test subnets ---------------------------------------------------- 15.87s
2026-04-16 07:00:23.075686 | orchestrator | Attach test volume ----------------------------------------------------- 13.10s
2026-04-16 07:00:23.075696 | orchestrator | Create test networks --------------------------------------------------- 12.99s
2026-04-16 07:00:23.075705 | orchestrator | Add member roles to user test ------------------------------------------ 11.03s
2026-04-16 07:00:23.075715 | orchestrator | Wait for tags to be added ----------------------------------------------- 8.99s
2026-04-16 07:00:23.075725 | orchestrator | Wait for metadata to be added ------------------------------------------- 8.94s
2026-04-16 07:00:23.075734 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.16s
2026-04-16 07:00:23.075752 | orchestrator | Create test volume ------------------------------------------------------ 5.48s
2026-04-16 07:00:23.075761 | orchestrator | Add tag to instances ---------------------------------------------------- 4.72s
2026-04-16 07:00:23.075771 | orchestrator | Create ssh security group ----------------------------------------------- 4.68s
2026-04-16 07:00:23.075780 | orchestrator | Add metadata to instances ----------------------------------------------- 4.47s
2026-04-16 07:00:23.075790 | orchestrator | Create test instances --------------------------------------------------- 4.34s
2026-04-16 07:00:23.075799 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.16s
2026-04-16 07:00:23.075809 | orchestrator | Create test server group ------------------------------------------------ 4.12s
2026-04-16 07:00:23.075819 | orchestrator | Create test user -------------------------------------------------------- 3.97s
2026-04-16 07:00:23.075828 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.94s
2026-04-16 07:00:23.075843 | orchestrator | Create test-admin user -------------------------------------------------- 3.91s
2026-04-16 07:00:23.385306 | orchestrator | + server_list
2026-04-16 07:00:23.385475 | orchestrator | + openstack --os-cloud test server list
2026-04-16 07:00:27.289899 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-16
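The PLAY RECAP line above is what decides whether this test play passed: `failed=0` and `unreachable=0`. When scripting around logs like this, that gate can be checked mechanically; a minimal sketch (the `recap_ok` helper is hypothetical, the recap field layout is standard Ansible output):

```python
import re

# Parses Ansible PLAY RECAP lines such as:
# localhost : ok=26 changed=23 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
RECAP = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(line: str) -> bool:
    """True when a recap line reports no failed tasks and no unreachable hosts."""
    m = RECAP.search(line)
    if m is None:
        raise ValueError("not a PLAY RECAP line")
    return m.group("failed") == "0" and m.group("unreachable") == "0"

line = "localhost : ok=26 changed=23 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0"
print(recap_ok(line))  # True
```

Note that `skipped` tasks (here the delete/detach tasks, which only fire on a re-run against existing resources) do not affect the pass condition.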
07:00:27.290003 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-04-16 07:00:27.290071 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-16 07:00:27.290084 | orchestrator | | 9f6d4eae-4a86-4bfc-8f3e-fcf038d2b61a | test-3 | ACTIVE | test-2=192.168.112.131, 192.168.201.6 | N/A (booted from volume) | SCS-1L-1 |
2026-04-16 07:00:27.290095 | orchestrator | | 65387079-4a6d-4b42-a28f-18ce145e99be | test-1 | ACTIVE | test-1=192.168.112.118, 192.168.200.117 | N/A (booted from volume) | SCS-1L-1 |
2026-04-16 07:00:27.290106 | orchestrator | | bf38d081-30f3-4ba0-bd0b-569f582e4d57 | test-2 | ACTIVE | test-2=192.168.112.158, 192.168.201.183 | N/A (booted from volume) | SCS-1L-1 |
2026-04-16 07:00:27.290116 | orchestrator | | f6a993c3-6e61-45fa-88ca-020d2ea97cc4 | test-4 | ACTIVE | test-3=192.168.112.133, 192.168.202.143 | N/A (booted from volume) | SCS-1L-1 |
2026-04-16 07:00:27.290126 | orchestrator | | f51d122f-e34a-402a-a9c1-9b7037551377 | test | ACTIVE | test-1=192.168.112.178, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 |
2026-04-16 07:00:27.290136 | orchestrator | +--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
2026-04-16 07:00:27.545791 | orchestrator | + openstack --os-cloud test server show test
2026-04-16 07:00:30.922312 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-16 07:00:30.922501 | orchestrator | | Field | Value |
2026-04-16 07:00:30.922534 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-04-16 07:00:30.922579 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-04-16 07:00:30.922592 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-04-16 07:00:30.922604 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-04-16 07:00:30.922620 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-04-16 07:00:30.922660 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-04-16 07:00:30.922672 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-04-16 07:00:30.922703 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-04-16 07:00:30.922715 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-04-16 07:00:30.922727 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-04-16 07:00:30.922751 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-04-16 07:00:30.922763 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-04-16 07:00:30.922774 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-04-16 07:00:30.922785 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-04-16 07:00:30.922801 | orchestrator | | OS-EXT-STS:task_state | None |
2026-04-16 07:00:30.922813 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-04-16 07:00:30.922825 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-16T06:58:47.000000 |
2026-04-16 07:00:30.922845 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-04-16 07:00:30.922858 | orchestrator | | accessIPv4 | |
2026-04-16 07:00:30.922878 | orchestrator | | accessIPv6 | |
2026-04-16 07:00:30.922908 | orchestrator
| | addresses | test-1=192.168.112.178, 192.168.200.155 | 2026-04-16 07:00:30.922928 | orchestrator | | config_drive | | 2026-04-16 07:00:30.922946 | orchestrator | | created | 2026-04-16T06:58:21Z | 2026-04-16 07:00:30.922965 | orchestrator | | description | None | 2026-04-16 07:00:30.922991 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-16 07:00:30.923010 | orchestrator | | hostId | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac | 2026-04-16 07:00:30.923029 | orchestrator | | host_status | None | 2026-04-16 07:00:30.923059 | orchestrator | | id | f51d122f-e34a-402a-a9c1-9b7037551377 | 2026-04-16 07:00:30.923082 | orchestrator | | image | N/A (booted from volume) | 2026-04-16 07:00:30.923113 | orchestrator | | key_name | test | 2026-04-16 07:00:30.923130 | orchestrator | | locked | False | 2026-04-16 07:00:30.923144 | orchestrator | | locked_reason | None | 2026-04-16 07:00:30.923157 | orchestrator | | name | test | 2026-04-16 07:00:30.923170 | orchestrator | | pinned_availability_zone | None | 2026-04-16 07:00:30.923183 | orchestrator | | progress | 0 | 2026-04-16 07:00:30.923205 | orchestrator | | project_id | 7cc2e55b0fc7451691d9affecd2ed105 | 2026-04-16 07:00:30.923217 | orchestrator | | properties | hostname='test' | 2026-04-16 07:00:30.923235 | orchestrator | | security_groups | name='icmp' | 2026-04-16 07:00:30.923253 | orchestrator | | | name='ssh' | 2026-04-16 07:00:30.923265 | orchestrator | | server_groups | None | 2026-04-16 07:00:30.923276 | orchestrator | | status | ACTIVE | 2026-04-16 07:00:30.923287 | orchestrator | | tags | test | 2026-04-16 07:00:30.923298 | orchestrator | | 
trusted_image_certificates | None | 2026-04-16 07:00:30.923309 | orchestrator | | updated | 2026-04-16T06:59:19Z | 2026-04-16 07:00:30.923330 | orchestrator | | user_id | 67e72a90634c4772ac688d413b6057f1 | 2026-04-16 07:00:30.923341 | orchestrator | | volumes_attached | delete_on_termination='True', id='be2fc687-3ec6-4504-a718-ce1c777b157a' | 2026-04-16 07:00:30.923418 | orchestrator | | | delete_on_termination='False', id='7182495a-af96-4f93-b9db-43724d69937e' | 2026-04-16 07:00:30.925556 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:31.194445 | orchestrator | + openstack --os-cloud test server show test-1 2026-04-16 07:00:34.082254 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:34.082404 | orchestrator | | Field | Value | 2026-04-16 07:00:34.082420 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2026-04-16 07:00:34.082429 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-16 07:00:34.082437 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-16 07:00:34.082461 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-16 07:00:34.082469 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-04-16 07:00:34.082476 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-16 07:00:34.082483 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-16 07:00:34.082524 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-16 07:00:34.082532 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-16 07:00:34.082539 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-16 07:00:34.082547 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-16 07:00:34.082554 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-16 07:00:34.082562 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-16 07:00:34.082573 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-16 07:00:34.082581 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-16 07:00:34.082588 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-16 07:00:34.082601 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-16T06:58:46.000000 | 2026-04-16 07:00:34.082614 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-16 07:00:34.082621 | orchestrator | | accessIPv4 | | 2026-04-16 07:00:34.082629 | orchestrator | | accessIPv6 | | 2026-04-16 07:00:34.082636 | orchestrator | | addresses | test-1=192.168.112.118, 192.168.200.117 | 2026-04-16 07:00:34.082644 | orchestrator | | config_drive | | 2026-04-16 07:00:34.082651 | orchestrator | | created | 2026-04-16T06:58:22Z | 2026-04-16 07:00:34.082662 | orchestrator | | description | None | 2026-04-16 07:00:34.082669 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-16 07:00:34.082681 | orchestrator | | hostId | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac | 2026-04-16 07:00:34.082689 | orchestrator | | host_status | None | 2026-04-16 07:00:34.082701 | orchestrator | | id | 65387079-4a6d-4b42-a28f-18ce145e99be | 2026-04-16 07:00:34.082709 | orchestrator | | image | N/A (booted from volume) | 2026-04-16 07:00:34.082716 | orchestrator | | key_name | test | 2026-04-16 07:00:34.082724 | orchestrator | | locked | False | 2026-04-16 07:00:34.082731 | orchestrator | | locked_reason | None | 2026-04-16 07:00:34.082739 | orchestrator | | name | test-1 | 2026-04-16 07:00:34.082749 | orchestrator | | pinned_availability_zone | None | 2026-04-16 07:00:34.082761 | orchestrator | | progress | 0 | 2026-04-16 07:00:34.082768 | orchestrator | | project_id | 7cc2e55b0fc7451691d9affecd2ed105 | 2026-04-16 07:00:34.082775 | orchestrator | | properties | hostname='test-1' | 2026-04-16 07:00:34.082788 | orchestrator | | security_groups | name='icmp' | 2026-04-16 07:00:34.082795 | orchestrator | | | name='ssh' | 2026-04-16 07:00:34.082803 | orchestrator | | server_groups | None | 2026-04-16 07:00:34.082810 | orchestrator | | status | ACTIVE | 2026-04-16 07:00:34.082817 | orchestrator | | tags | test | 2026-04-16 07:00:34.082824 | orchestrator | | trusted_image_certificates | None | 2026-04-16 07:00:34.082832 | orchestrator | | updated | 2026-04-16T06:59:20Z | 2026-04-16 07:00:34.082845 | orchestrator | | user_id | 67e72a90634c4772ac688d413b6057f1 | 2026-04-16 07:00:34.082852 | orchestrator | | volumes_attached | delete_on_termination='True', id='2b50d504-0938-40ca-b5f4-01df85085085' | 2026-04-16 07:00:34.084883 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:34.344582 | orchestrator | + openstack --os-cloud test server show test-2 2026-04-16 07:00:37.509161 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:37.509285 | orchestrator | | Field | Value | 2026-04-16 07:00:37.509302 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:37.509313 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-16 07:00:37.509564 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-16 07:00:37.509587 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-16 07:00:37.509647 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-04-16 07:00:37.509667 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-16 07:00:37.509685 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-16 
07:00:37.509729 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-16 07:00:37.509748 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-16 07:00:37.509766 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-16 07:00:37.509785 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-16 07:00:37.509803 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-16 07:00:37.509821 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-16 07:00:37.509853 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-16 07:00:37.509879 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-16 07:00:37.509898 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-16 07:00:37.509916 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-16T06:58:49.000000 | 2026-04-16 07:00:37.509944 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-16 07:00:37.510107 | orchestrator | | accessIPv4 | | 2026-04-16 07:00:37.510131 | orchestrator | | accessIPv6 | | 2026-04-16 07:00:37.510147 | orchestrator | | addresses | test-2=192.168.112.158, 192.168.201.183 | 2026-04-16 07:00:37.510165 | orchestrator | | config_drive | | 2026-04-16 07:00:37.510195 | orchestrator | | created | 2026-04-16T06:58:22Z | 2026-04-16 07:00:37.510213 | orchestrator | | description | None | 2026-04-16 07:00:37.510238 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-16 07:00:37.510257 | orchestrator | | hostId | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac | 2026-04-16 07:00:37.510274 | orchestrator | | host_status | None | 2026-04-16 07:00:37.510303 | orchestrator | | id | 
bf38d081-30f3-4ba0-bd0b-569f582e4d57 | 2026-04-16 07:00:37.510322 | orchestrator | | image | N/A (booted from volume) | 2026-04-16 07:00:37.510340 | orchestrator | | key_name | test | 2026-04-16 07:00:37.510380 | orchestrator | | locked | False | 2026-04-16 07:00:37.510407 | orchestrator | | locked_reason | None | 2026-04-16 07:00:37.510424 | orchestrator | | name | test-2 | 2026-04-16 07:00:37.510440 | orchestrator | | pinned_availability_zone | None | 2026-04-16 07:00:37.510451 | orchestrator | | progress | 0 | 2026-04-16 07:00:37.510461 | orchestrator | | project_id | 7cc2e55b0fc7451691d9affecd2ed105 | 2026-04-16 07:00:37.510471 | orchestrator | | properties | hostname='test-2' | 2026-04-16 07:00:37.510488 | orchestrator | | security_groups | name='icmp' | 2026-04-16 07:00:37.510499 | orchestrator | | | name='ssh' | 2026-04-16 07:00:37.510509 | orchestrator | | server_groups | None | 2026-04-16 07:00:37.510519 | orchestrator | | status | ACTIVE | 2026-04-16 07:00:37.510535 | orchestrator | | tags | test | 2026-04-16 07:00:37.510545 | orchestrator | | trusted_image_certificates | None | 2026-04-16 07:00:37.510559 | orchestrator | | updated | 2026-04-16T06:59:20Z | 2026-04-16 07:00:37.510570 | orchestrator | | user_id | 67e72a90634c4772ac688d413b6057f1 | 2026-04-16 07:00:37.510579 | orchestrator | | volumes_attached | delete_on_termination='True', id='cac65a6c-1904-4384-9ed3-01b5feac6425' | 2026-04-16 07:00:37.514523 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:37.765104 | orchestrator | + openstack --os-cloud test server show test-3 2026-04-16 07:00:40.495282 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:40.495431 | orchestrator | | Field | Value | 2026-04-16 07:00:40.495454 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:40.495486 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-16 07:00:40.495496 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-16 07:00:40.495504 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-16 07:00:40.495525 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-04-16 07:00:40.495534 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-16 07:00:40.495542 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-16 07:00:40.495566 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-16 07:00:40.495575 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-16 07:00:40.495584 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-16 07:00:40.495598 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-16 07:00:40.495606 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-16 07:00:40.495615 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-16 07:00:40.495623 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-04-16 07:00:40.495632 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-16 07:00:40.495641 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-16 07:00:40.495650 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-16T06:58:49.000000 | 2026-04-16 07:00:40.495663 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-16 07:00:40.495672 | orchestrator | | accessIPv4 | | 2026-04-16 07:00:40.495692 | orchestrator | | accessIPv6 | | 2026-04-16 07:00:40.495700 | orchestrator | | addresses | test-2=192.168.112.131, 192.168.201.6 | 2026-04-16 07:00:40.496052 | orchestrator | | config_drive | | 2026-04-16 07:00:40.496066 | orchestrator | | created | 2026-04-16T06:58:23Z | 2026-04-16 07:00:40.496076 | orchestrator | | description | None | 2026-04-16 07:00:40.496086 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-16 07:00:40.496096 | orchestrator | | hostId | 4544a6d235a3a35eb0870b8bd6995d405d7ec82b14225f1eab019f4a | 2026-04-16 07:00:40.496105 | orchestrator | | host_status | None | 2026-04-16 07:00:40.496122 | orchestrator | | id | 9f6d4eae-4a86-4bfc-8f3e-fcf038d2b61a | 2026-04-16 07:00:40.496138 | orchestrator | | image | N/A (booted from volume) | 2026-04-16 07:00:40.496149 | orchestrator | | key_name | test | 2026-04-16 07:00:40.496158 | orchestrator | | locked | False | 2026-04-16 07:00:40.496172 | orchestrator | | locked_reason | None | 2026-04-16 07:00:40.496182 | orchestrator | | name | test-3 | 2026-04-16 07:00:40.496192 | orchestrator | | pinned_availability_zone | None | 2026-04-16 07:00:40.496202 | orchestrator | | progress | 0 | 2026-04-16 
07:00:40.496211 | orchestrator | | project_id | 7cc2e55b0fc7451691d9affecd2ed105 | 2026-04-16 07:00:40.496219 | orchestrator | | properties | hostname='test-3' | 2026-04-16 07:00:40.496233 | orchestrator | | security_groups | name='icmp' | 2026-04-16 07:00:40.496247 | orchestrator | | | name='ssh' | 2026-04-16 07:00:40.496256 | orchestrator | | server_groups | None | 2026-04-16 07:00:40.496264 | orchestrator | | status | ACTIVE | 2026-04-16 07:00:40.496277 | orchestrator | | tags | test | 2026-04-16 07:00:40.496285 | orchestrator | | trusted_image_certificates | None | 2026-04-16 07:00:40.496294 | orchestrator | | updated | 2026-04-16T06:59:21Z | 2026-04-16 07:00:40.496302 | orchestrator | | user_id | 67e72a90634c4772ac688d413b6057f1 | 2026-04-16 07:00:40.496310 | orchestrator | | volumes_attached | delete_on_termination='True', id='b06a5d91-b80d-45ef-9265-78ba7fb45521' | 2026-04-16 07:00:40.500407 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:40.747854 | orchestrator | + openstack --os-cloud test server show test-4 2026-04-16 07:00:43.668889 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:43.668997 | orchestrator | | Field | Value | 2026-04-16 07:00:43.669013 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:43.669026 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-04-16 07:00:43.669055 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-04-16 07:00:43.669068 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-04-16 07:00:43.669079 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-04-16 07:00:43.669091 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-04-16 07:00:43.669102 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-04-16 07:00:43.669152 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-04-16 07:00:43.669166 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-04-16 07:00:43.669177 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-04-16 07:00:43.669189 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-04-16 07:00:43.669200 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-04-16 07:00:43.669216 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-04-16 07:00:43.669228 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-04-16 07:00:43.669240 | orchestrator | | OS-EXT-STS:task_state | None | 2026-04-16 07:00:43.669251 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-04-16 07:00:43.669270 | orchestrator | | OS-SRV-USG:launched_at | 2026-04-16T06:58:49.000000 | 2026-04-16 07:00:43.669288 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-04-16 07:00:43.669300 | orchestrator | | accessIPv4 | | 2026-04-16 07:00:43.669312 | orchestrator | | accessIPv6 | | 2026-04-16 07:00:43.669323 | orchestrator | | 
addresses | test-3=192.168.112.133, 192.168.202.143 | 2026-04-16 07:00:43.669334 | orchestrator | | config_drive | | 2026-04-16 07:00:43.669421 | orchestrator | | created | 2026-04-16T06:58:22Z | 2026-04-16 07:00:43.669436 | orchestrator | | description | None | 2026-04-16 07:00:43.669447 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-04-16 07:00:43.669467 | orchestrator | | hostId | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac | 2026-04-16 07:00:43.669478 | orchestrator | | host_status | None | 2026-04-16 07:00:43.669497 | orchestrator | | id | f6a993c3-6e61-45fa-88ca-020d2ea97cc4 | 2026-04-16 07:00:43.669509 | orchestrator | | image | N/A (booted from volume) | 2026-04-16 07:00:43.669521 | orchestrator | | key_name | test | 2026-04-16 07:00:43.669532 | orchestrator | | locked | False | 2026-04-16 07:00:43.669548 | orchestrator | | locked_reason | None | 2026-04-16 07:00:43.669560 | orchestrator | | name | test-4 | 2026-04-16 07:00:43.669571 | orchestrator | | pinned_availability_zone | None | 2026-04-16 07:00:43.669582 | orchestrator | | progress | 0 | 2026-04-16 07:00:43.669600 | orchestrator | | project_id | 7cc2e55b0fc7451691d9affecd2ed105 | 2026-04-16 07:00:43.669612 | orchestrator | | properties | hostname='test-4' | 2026-04-16 07:00:43.669629 | orchestrator | | security_groups | name='icmp' | 2026-04-16 07:00:43.669641 | orchestrator | | | name='ssh' | 2026-04-16 07:00:43.669653 | orchestrator | | server_groups | None | 2026-04-16 07:00:43.669664 | orchestrator | | status | ACTIVE | 2026-04-16 07:00:43.669680 | orchestrator | | tags | test | 2026-04-16 07:00:43.669692 | orchestrator | | 
trusted_image_certificates | None | 2026-04-16 07:00:43.669703 | orchestrator | | updated | 2026-04-16T06:59:21Z | 2026-04-16 07:00:43.669721 | orchestrator | | user_id | 67e72a90634c4772ac688d413b6057f1 | 2026-04-16 07:00:43.669733 | orchestrator | | volumes_attached | delete_on_termination='True', id='dd128722-60bf-4529-8019-8b68aa8c4eba' | 2026-04-16 07:00:43.674411 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-04-16 07:00:43.943754 | orchestrator | + server_ping 2026-04-16 07:00:43.945163 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-04-16 07:00:43.945225 | orchestrator | ++ tr -d '\r' 2026-04-16 07:00:46.741929 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-16 07:00:46.742124 | orchestrator | + ping -c3 192.168.112.158 2026-04-16 07:00:46.758236 | orchestrator | PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data. 
2026-04-16 07:00:46.758327 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=7.49 ms 2026-04-16 07:00:47.756003 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=3.00 ms 2026-04-16 07:00:48.756311 | orchestrator | 64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=2.12 ms 2026-04-16 07:00:48.756457 | orchestrator | 2026-04-16 07:00:48.756474 | orchestrator | --- 192.168.112.158 ping statistics --- 2026-04-16 07:00:48.756487 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-16 07:00:48.756498 | orchestrator | rtt min/avg/max/mdev = 2.119/4.204/7.493/2.353 ms 2026-04-16 07:00:48.756838 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-16 07:00:48.756858 | orchestrator | + ping -c3 192.168.112.118 2026-04-16 07:00:48.770956 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data. 2026-04-16 07:00:48.771040 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=9.76 ms 2026-04-16 07:00:49.766533 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=3.00 ms 2026-04-16 07:00:50.766427 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.69 ms 2026-04-16 07:00:50.766513 | orchestrator | 2026-04-16 07:00:50.766525 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-04-16 07:00:50.766534 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-16 07:00:50.766541 | orchestrator | rtt min/avg/max/mdev = 1.692/4.816/9.759/3.535 ms 2026-04-16 07:00:50.766856 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-16 07:00:50.766872 | orchestrator | + ping -c3 192.168.112.131 2026-04-16 07:00:50.783202 | orchestrator | PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data. 
2026-04-16 07:00:50.783292 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=11.3 ms 2026-04-16 07:00:51.775671 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.06 ms 2026-04-16 07:00:52.777996 | orchestrator | 64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=2.21 ms 2026-04-16 07:00:52.778148 | orchestrator | 2026-04-16 07:00:52.778174 | orchestrator | --- 192.168.112.131 ping statistics --- 2026-04-16 07:00:52.778230 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-04-16 07:00:52.778251 | orchestrator | rtt min/avg/max/mdev = 2.062/5.182/11.270/4.305 ms 2026-04-16 07:00:52.778273 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-16 07:00:52.778293 | orchestrator | + ping -c3 192.168.112.178 2026-04-16 07:00:52.789580 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data. 2026-04-16 07:00:52.789670 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=8.33 ms 2026-04-16 07:00:53.785200 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.55 ms 2026-04-16 07:00:54.786487 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=2.00 ms 2026-04-16 07:00:54.786621 | orchestrator | 2026-04-16 07:00:54.786639 | orchestrator | --- 192.168.112.178 ping statistics --- 2026-04-16 07:00:54.786653 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-04-16 07:00:54.786665 | orchestrator | rtt min/avg/max/mdev = 2.001/4.294/8.333/2.864 ms 2026-04-16 07:00:54.787553 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-04-16 07:00:54.787580 | orchestrator | + ping -c3 192.168.112.133 2026-04-16 07:00:54.800875 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2026-04-16 07:00:54.800959 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=8.93 ms 2026-04-16 07:00:55.794411 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.09 ms 2026-04-16 07:00:56.795952 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.89 ms 2026-04-16 07:00:56.796084 | orchestrator | 2026-04-16 07:00:56.796102 | orchestrator | --- 192.168.112.133 ping statistics --- 2026-04-16 07:00:56.796115 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms 2026-04-16 07:00:56.796127 | orchestrator | rtt min/avg/max/mdev = 1.892/4.305/8.931/3.271 ms 2026-04-16 07:00:56.796553 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-04-16 07:00:56.911554 | orchestrator | ok: Runtime: 0:08:44.598211 2026-04-16 07:00:56.952168 | 2026-04-16 07:00:56.952297 | TASK [Run tempest] 2026-04-16 07:00:57.484940 | orchestrator | skipping: Conditional result was False 2026-04-16 07:00:57.503051 | 2026-04-16 07:00:57.503265 | TASK [Check prometheus alert status] 2026-04-16 07:00:58.059600 | orchestrator | skipping: Conditional result was False 2026-04-16 07:00:58.072576 | 2026-04-16 07:00:58.072722 | PLAY [Upgrade testbed] 2026-04-16 07:00:58.083800 | 2026-04-16 07:00:58.083913 | TASK [Print next ceph version] 2026-04-16 07:00:58.162663 | orchestrator | ok 2026-04-16 07:00:58.172307 | 2026-04-16 07:00:58.172446 | TASK [Print next openstack version] 2026-04-16 07:00:58.251157 | orchestrator | ok 2026-04-16 07:00:58.262670 | 2026-04-16 07:00:58.262794 | TASK [Print next manager version] 2026-04-16 07:00:58.338511 | orchestrator | ok 2026-04-16 07:00:58.346553 | 2026-04-16 07:00:58.346681 | TASK [Set cloud fact (Zuul deployment)] 2026-04-16 07:00:58.389585 | orchestrator | ok 2026-04-16 07:00:58.398775 | 2026-04-16 07:00:58.398939 | TASK [Set cloud fact (local deployment)] 2026-04-16 07:00:58.433776 | orchestrator | skipping: Conditional result was False 2026-04-16 07:00:58.446786 | 2026-04-16 
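The `server_ping` helper traced above lists every ACTIVE floating IP and pings each one; the `tr -d '\r'` guards against carriage returns in the CLI output before the shell word-splits the list. A minimal offline sketch of that loop — `list_floating_ips` is a stand-in for the real `openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address"` call, and `do_ping` records targets instead of running the real `ping -c3`:

```shell
#!/usr/bin/env bash
set -e

# Stand-in for the openstack CLI: two addresses with CRLF line endings,
# mimicking output that carries stray carriage returns.
list_floating_ips() {
  printf '192.168.112.158\r\n192.168.112.118\r\n'
}

# Stub for "ping -c3 $address": collect targets rather than send ICMP.
pinged=""
do_ping() { pinged="$pinged $1"; }

server_ping() {
  # tr strips \r so each address word-splits cleanly, as in the trace
  for address in $(list_floating_ips | tr -d '\r'); do
    do_ping "$address"
  done
}

server_ping
echo "pinged:$pinged"
```

Swapping the stubs back for the real `openstack` and `ping` commands reproduces the loop seen in the trace.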
07:00:58.446952 | TASK [Fetch manager address] 2026-04-16 07:00:58.728483 | orchestrator | ok 2026-04-16 07:00:58.738144 | 2026-04-16 07:00:58.738270 | TASK [Set manager_host address] 2026-04-16 07:00:58.803391 | orchestrator | ok 2026-04-16 07:00:58.812603 | 2026-04-16 07:00:58.812741 | TASK [Run upgrade] 2026-04-16 07:00:59.526221 | orchestrator | + set -e 2026-04-16 07:00:59.526492 | orchestrator | + export MANAGER_VERSION=10.0.0 2026-04-16 07:00:59.526519 | orchestrator | + MANAGER_VERSION=10.0.0 2026-04-16 07:00:59.526528 | orchestrator | + CEPH_VERSION=reef 2026-04-16 07:00:59.526536 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-04-16 07:00:59.526543 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-04-16 07:00:59.526552 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0 reef 2024.2 kolla/release' 2026-04-16 07:00:59.535040 | orchestrator | + set -e 2026-04-16 07:00:59.535137 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 07:00:59.535161 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 07:00:59.535184 | orchestrator | ++ INTERACTIVE=false 2026-04-16 07:00:59.535199 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 07:00:59.535216 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 07:00:59.536582 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-04-16 07:00:59.573462 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-04-16 07:00:59.573566 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-04-16 07:00:59.608070 | orchestrator | 2026-04-16 07:00:59.608166 | orchestrator | # UPGRADE MANAGER 2026-04-16 07:00:59.608185 | orchestrator | 2026-04-16 07:00:59.608195 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-04-16 07:00:59.608206 | orchestrator | + echo 2026-04-16 07:00:59.608218 | orchestrator | + echo '# UPGRADE MANAGER' 2026-04-16 
07:00:59.608227 | orchestrator | + echo 2026-04-16 07:00:59.608236 | orchestrator | + export MANAGER_VERSION=10.0.0 2026-04-16 07:00:59.608245 | orchestrator | + MANAGER_VERSION=10.0.0 2026-04-16 07:00:59.608254 | orchestrator | + CEPH_VERSION=reef 2026-04-16 07:00:59.608263 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-04-16 07:00:59.608272 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-04-16 07:00:59.608281 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0 2026-04-16 07:00:59.613536 | orchestrator | + set -e 2026-04-16 07:00:59.613602 | orchestrator | + VERSION=10.0.0 2026-04-16 07:00:59.613614 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0/g' /opt/configuration/environments/manager/configuration.yml 2026-04-16 07:00:59.619867 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]] 2026-04-16 07:00:59.619960 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-16 07:00:59.623299 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-04-16 07:00:59.627948 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-04-16 07:00:59.636326 | orchestrator | /opt/configuration ~ 2026-04-16 07:00:59.636403 | orchestrator | + set -e 2026-04-16 07:00:59.636415 | orchestrator | + pushd /opt/configuration 2026-04-16 07:00:59.636426 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-04-16 07:00:59.636438 | orchestrator | + source /opt/venv/bin/activate 2026-04-16 07:00:59.637568 | orchestrator | ++ deactivate nondestructive 2026-04-16 07:00:59.637593 | orchestrator | ++ '[' -n '' ']' 2026-04-16 07:00:59.637604 | orchestrator | ++ '[' -n '' ']' 2026-04-16 07:00:59.637627 | orchestrator | ++ hash -r 2026-04-16 07:00:59.637638 | orchestrator | ++ '[' -n '' ']' 2026-04-16 07:00:59.637648 | orchestrator | ++ unset VIRTUAL_ENV 2026-04-16 07:00:59.637672 | orchestrator | ++ unset 
VIRTUAL_ENV_PROMPT 2026-04-16 07:00:59.637683 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-04-16 07:00:59.637693 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-04-16 07:00:59.637703 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-04-16 07:00:59.637713 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-04-16 07:00:59.637767 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-04-16 07:00:59.637780 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 07:00:59.637791 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-04-16 07:00:59.637802 | orchestrator | ++ export PATH 2026-04-16 07:00:59.637999 | orchestrator | ++ '[' -n '' ']' 2026-04-16 07:00:59.638056 | orchestrator | ++ '[' -z '' ']' 2026-04-16 07:00:59.638067 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-04-16 07:00:59.638076 | orchestrator | ++ PS1='(venv) ' 2026-04-16 07:00:59.638084 | orchestrator | ++ export PS1 2026-04-16 07:00:59.638092 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-04-16 07:00:59.638100 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-04-16 07:00:59.638108 | orchestrator | ++ hash -r 2026-04-16 07:00:59.638124 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-04-16 07:01:00.514718 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-04-16 07:01:00.514793 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1) 2026-04-16 07:01:00.515996 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-04-16 07:01:00.517407 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-04-16 07:01:00.518622 | 
orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1) 2026-04-16 07:01:00.528766 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2) 2026-04-16 07:01:00.530144 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-04-16 07:01:00.531238 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-04-16 07:01:00.532608 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-04-16 07:01:00.564063 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7) 2026-04-16 07:01:00.565387 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-04-16 07:01:00.567117 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-04-16 07:01:00.568714 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-04-16 07:01:00.572607 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-04-16 07:01:00.778185 | orchestrator | ++ which gilt 2026-04-16 07:01:00.782287 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-04-16 07:01:00.782352 | orchestrator | + /opt/venv/bin/gilt overlay 2026-04-16 07:01:00.991442 | orchestrator | osism.cfg-generics: 2026-04-16 07:01:01.083416 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 
2026-04-16 07:01:01.084408 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-16 07:01:01.086062 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-16 07:01:01.086101 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-16 07:01:02.015749 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-16 07:01:02.029350 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-16 07:01:02.358151 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-16 07:01:02.410268 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-16 07:01:02.410414 | orchestrator | + deactivate
2026-04-16 07:01:02.410428 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-16 07:01:02.410438 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-16 07:01:02.410445 | orchestrator | + export PATH
2026-04-16 07:01:02.410452 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-16 07:01:02.410459 | orchestrator | + '[' -n '' ']'
2026-04-16 07:01:02.410466 | orchestrator | + hash -r
2026-04-16 07:01:02.410483 | orchestrator | + '[' -n '' ']'
2026-04-16 07:01:02.410490 | orchestrator | + unset VIRTUAL_ENV
2026-04-16 07:01:02.410496 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-16 07:01:02.410502 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-16 07:01:02.410508 | orchestrator | + unset -f deactivate
2026-04-16 07:01:02.410514 | orchestrator | ~
2026-04-16 07:01:02.410521 | orchestrator | + popd
2026-04-16 07:01:02.412995 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-16 07:01:02.413099 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-04-16 07:01:02.417512 | orchestrator | + set -e
2026-04-16 07:01:02.417559 | orchestrator | + NAMESPACE=kolla/release
2026-04-16 07:01:02.417566 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-04-16 07:01:02.425278 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-04-16 07:01:02.430216 | orchestrator | /opt/configuration ~
2026-04-16 07:01:02.430279 | orchestrator | + set -e
2026-04-16 07:01:02.430285 | orchestrator | + pushd /opt/configuration
2026-04-16 07:01:02.430290 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-16 07:01:02.430294 | orchestrator | + source /opt/venv/bin/activate
2026-04-16 07:01:02.430299 | orchestrator | ++ deactivate nondestructive
2026-04-16 07:01:02.430303 | orchestrator | ++ '[' -n '' ']'
2026-04-16 07:01:02.430307 | orchestrator | ++ '[' -n '' ']'
2026-04-16 07:01:02.430311 | orchestrator | ++ hash -r
2026-04-16 07:01:02.430321 | orchestrator | ++ '[' -n '' ']'
2026-04-16 07:01:02.430325 | orchestrator | ++ unset VIRTUAL_ENV
2026-04-16 07:01:02.430329 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-04-16 07:01:02.430333 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-04-16 07:01:02.430338 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-04-16 07:01:02.430342 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-04-16 07:01:02.430345 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-04-16 07:01:02.430352 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-04-16 07:01:02.430399 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-16 07:01:02.430405 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-16 07:01:02.430409 | orchestrator | ++ export PATH
2026-04-16 07:01:02.430413 | orchestrator | ++ '[' -n '' ']'
2026-04-16 07:01:02.430417 | orchestrator | ++ '[' -z '' ']'
2026-04-16 07:01:02.430421 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-04-16 07:01:02.430425 | orchestrator | ++ PS1='(venv) '
2026-04-16 07:01:02.430429 | orchestrator | ++ export PS1
2026-04-16 07:01:02.430433 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-04-16 07:01:02.430437 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-04-16 07:01:02.430441 | orchestrator | ++ hash -r
2026-04-16 07:01:02.430445 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-04-16 07:01:02.896799 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-04-16 07:01:02.897656 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.33.1)
2026-04-16 07:01:02.899048 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-04-16 07:01:02.900495 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-04-16 07:01:02.901665 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.1)
2026-04-16 07:01:02.911500 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.2)
2026-04-16 07:01:02.913123 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-04-16 07:01:02.914128 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-04-16 07:01:02.915601 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-04-16 07:01:02.947508 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.7)
2026-04-16 07:01:02.949058 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-04-16 07:01:02.950775 | orchestrator | Requirement already satisfied: urllib3<3,>=1.26 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-04-16 07:01:02.952103 | orchestrator | Requirement already satisfied: certifi>=2023.5.7 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25)
2026-04-16 07:01:02.955829 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-04-16 07:01:03.201526 | orchestrator | ++ which gilt
2026-04-16 07:01:03.203099 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-04-16 07:01:03.203157 | orchestrator | + /opt/venv/bin/gilt overlay
2026-04-16 07:01:03.371231 | orchestrator | osism.cfg-generics:
2026-04-16 07:01:03.464784 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-04-16 07:01:03.464910 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-04-16 07:01:03.465023 | orchestrator | - copied (v0.20260319.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-04-16 07:01:03.466328 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-04-16 07:01:04.227722 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-04-16 07:01:04.238270 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-04-16 07:01:04.576169 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-04-16 07:01:04.632187 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-04-16 07:01:04.632298 | orchestrator | + deactivate
2026-04-16 07:01:04.632311 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-04-16 07:01:04.632320 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-04-16 07:01:04.632327 | orchestrator | + export PATH
2026-04-16 07:01:04.632333 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-04-16 07:01:04.632341 | orchestrator | + '[' -n '' ']'
2026-04-16 07:01:04.632347 | orchestrator | + hash -r
2026-04-16 07:01:04.632461 | orchestrator | ~
2026-04-16 07:01:04.632473 | orchestrator | + '[' -n '' ']'
2026-04-16 07:01:04.632480 | orchestrator | + unset VIRTUAL_ENV
2026-04-16 07:01:04.632487 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-04-16 07:01:04.632493 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-04-16 07:01:04.632500 | orchestrator | + unset -f deactivate
2026-04-16 07:01:04.632507 | orchestrator | + popd
2026-04-16 07:01:04.634689 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-04-16 07:01:04.686327 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-16 07:01:04.686483 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-16 07:01:04.686962 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-16 07:01:04.766193 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 07:01:04.766268 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-16 07:01:04.776907 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-04-16 07:01:04.783272 | orchestrator | ++ semver v0.20251130.0 9.5.0
2026-04-16 07:01:04.846282 | orchestrator | + [[ -1 -le 0 ]]
2026-04-16 07:01:04.846348 | orchestrator | + [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-16 07:01:04.846916 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-16 07:01:04.911227 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 07:01:04.911330 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-04-16 07:01:04.912981 | orchestrator | +++ semver 2024.2 2024.2
2026-04-16 07:01:04.989696 | orchestrator | ++ '[' 0 -le 0 ']'
2026-04-16 07:01:04.989991 | orchestrator | +++ semver 2024.2 2025.1
2026-04-16 07:01:05.049432 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-04-16 07:01:05.049533 | orchestrator | ++ echo false
2026-04-16 07:01:05.049550 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-04-16 07:01:05.049566 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-16 07:01:05.049578 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-04-16 07:01:05.049589 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-04-16 07:01:05.049600 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-04-16 07:01:05.053454 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-04-16 07:01:05.053517 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-04-16 07:01:05.070148 | orchestrator | export RABBITMQ3TO4=true
2026-04-16 07:01:05.072570 | orchestrator | + osism update manager
2026-04-16 07:01:10.566883 | orchestrator | Collecting uv
2026-04-16 07:01:10.661616 | orchestrator | Downloading uv-0.11.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-04-16 07:01:10.681456 | orchestrator | Downloading uv-0.11.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (24.9 MB)
2026-04-16 07:01:11.505934 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.9/24.9 MB 34.4 MB/s eta 0:00:00
2026-04-16 07:01:11.587846 | orchestrator | Installing collected packages: uv
2026-04-16 07:01:12.087647 | orchestrator | Successfully installed uv-0.11.7
2026-04-16 07:01:12.764709 | orchestrator | Resolved 11 packages in 400ms
2026-04-16 07:01:12.779102 | orchestrator | Downloading cryptography (4.3MiB)
2026-04-16 07:01:12.801031 | orchestrator | Downloading netaddr (2.2MiB)
2026-04-16 07:01:12.801164 | orchestrator | Downloading ansible (54.5MiB)
2026-04-16 07:01:12.801245 | orchestrator | Downloading ansible-core (2.1MiB)
2026-04-16 07:01:13.131478 | orchestrator | Downloaded netaddr
2026-04-16 07:01:13.242238 | orchestrator | Downloaded ansible-core
2026-04-16 07:01:13.262335 | orchestrator | Downloaded cryptography
2026-04-16 07:01:19.139529 | orchestrator | Downloaded ansible
2026-04-16 07:01:19.140045 | orchestrator | Prepared 11 packages in 6.37s
2026-04-16 07:01:19.683118 | orchestrator | Installed 11 packages in 541ms
2026-04-16 07:01:19.683310 | orchestrator | + ansible==11.11.0
2026-04-16 07:01:19.683337 | orchestrator | + ansible-core==2.18.15
2026-04-16 07:01:19.683350 | orchestrator | + cffi==2.0.0
2026-04-16 07:01:19.683362 | orchestrator | + cryptography==46.0.7
2026-04-16 07:01:19.683460 | orchestrator | + jinja2==3.1.6
2026-04-16 07:01:19.683473 | orchestrator | + markupsafe==3.0.3
2026-04-16 07:01:19.683484 | orchestrator | + netaddr==1.3.0
2026-04-16 07:01:19.683494 | orchestrator | + packaging==26.1
2026-04-16 07:01:19.683508 | orchestrator | + pycparser==3.0
2026-04-16 07:01:19.683519 | orchestrator | + pyyaml==6.0.3
2026-04-16 07:01:19.683640 | orchestrator | + resolvelib==1.0.1
2026-04-16 07:01:20.855446 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-194786y3ryaati/tmp_bwrff3x/ansible-collection-serviceso8dhfpe0'...
2026-04-16 07:01:22.306493 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-16 07:01:22.306596 | orchestrator | Already on 'main'
2026-04-16 07:01:22.763225 | orchestrator | Starting galaxy collection install process
2026-04-16 07:01:22.763331 | orchestrator | Process install dependency map
2026-04-16 07:01:22.763348 | orchestrator | Starting collection install process
2026-04-16 07:01:22.763361 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-04-16 07:01:22.763447 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-04-16 07:01:22.763460 | orchestrator | osism.services:999.0.0 was installed successfully
2026-04-16 07:01:23.316909 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-1950142yx931a4/tmp0lfe6ftx/ansible-playbooks-managerd3f9gj3j'...
2026-04-16 07:01:24.144876 | orchestrator | Your branch is up to date with 'origin/main'.
2026-04-16 07:01:24.144963 | orchestrator | Already on 'main'
2026-04-16 07:01:24.408124 | orchestrator | Starting galaxy collection install process
2026-04-16 07:01:24.408224 | orchestrator | Process install dependency map
2026-04-16 07:01:24.408242 | orchestrator | Starting collection install process
2026-04-16 07:01:24.408256 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-04-16 07:01:24.408269 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-04-16 07:01:24.408281 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-04-16 07:01:25.002436 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-04-16 07:01:25.002561 | orchestrator | -vvvv to see details
2026-04-16 07:01:25.418684 | orchestrator |
2026-04-16 07:01:25.418788 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-04-16 07:01:25.418805 | orchestrator |
2026-04-16 07:01:25.418840 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-04-16 07:01:29.380833 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:29.380936 | orchestrator |
2026-04-16 07:01:29.380951 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-04-16 07:01:29.438295 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-04-16 07:01:29.438465 | orchestrator |
2026-04-16 07:01:29.438485 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-04-16 07:01:31.137908 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:31.138010 | orchestrator |
2026-04-16 07:01:31.138103 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-04-16 07:01:31.191669 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:31.191760 | orchestrator |
2026-04-16 07:01:31.191772 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-04-16 07:01:31.278905 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-04-16 07:01:31.279003 | orchestrator |
2026-04-16 07:01:31.279016 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-04-16 07:01:35.369514 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-04-16 07:01:35.369635 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-04-16 07:01:35.369653 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-04-16 07:01:35.369679 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-04-16 07:01:35.369690 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-04-16 07:01:35.369701 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-04-16 07:01:35.369712 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-04-16 07:01:35.369729 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-04-16 07:01:35.369747 | orchestrator |
2026-04-16 07:01:35.369768 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-04-16 07:01:36.394908 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:36.395010 | orchestrator |
2026-04-16 07:01:36.395028 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-04-16 07:01:37.302639 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:37.302720 | orchestrator |
2026-04-16 07:01:37.302730 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-04-16 07:01:37.393155 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-04-16 07:01:37.393263 | orchestrator |
2026-04-16 07:01:37.393282 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-04-16 07:01:39.227179 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-04-16 07:01:39.227265 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-04-16 07:01:39.227275 | orchestrator |
2026-04-16 07:01:39.227284 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-04-16 07:01:40.169168 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:40.169273 | orchestrator |
2026-04-16 07:01:40.169289 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-04-16 07:01:40.238793 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:01:40.238900 | orchestrator |
2026-04-16 07:01:40.238917 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-04-16 07:01:40.328190 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-04-16 07:01:40.328290 | orchestrator |
2026-04-16 07:01:40.328306 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-04-16 07:01:41.217332 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:41.217460 | orchestrator |
2026-04-16 07:01:41.217474 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-04-16 07:01:41.284113 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-04-16 07:01:41.284205 | orchestrator |
2026-04-16 07:01:41.284219 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-04-16 07:01:43.098670 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-16 07:01:43.098743 | orchestrator | ok: [testbed-manager] => (item=None)
2026-04-16 07:01:43.098751 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:43.098758 | orchestrator |
2026-04-16 07:01:43.098774 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-04-16 07:01:44.041240 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:44.041326 | orchestrator |
2026-04-16 07:01:44.041337 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-04-16 07:01:44.106756 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:01:44.106845 | orchestrator |
2026-04-16 07:01:44.106856 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-04-16 07:01:44.204901 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-04-16 07:01:44.205000 | orchestrator |
2026-04-16 07:01:44.205015 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-04-16 07:01:44.859734 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:44.859818 | orchestrator |
2026-04-16 07:01:44.859829 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-04-16 07:01:45.400201 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:45.400272 | orchestrator |
2026-04-16 07:01:45.400279 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-04-16 07:01:47.189578 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-04-16 07:01:47.189667 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-04-16 07:01:47.189678 | orchestrator |
2026-04-16 07:01:47.189687 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-04-16 07:01:48.340822 | orchestrator | changed: [testbed-manager]
2026-04-16 07:01:48.340924 | orchestrator |
2026-04-16 07:01:48.340938 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-04-16 07:01:48.874913 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:48.875019 | orchestrator |
2026-04-16 07:01:48.875035 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-04-16 07:01:49.396009 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:49.396127 | orchestrator |
2026-04-16 07:01:49.396153 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-04-16 07:01:49.450426 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:01:49.450530 | orchestrator |
2026-04-16 07:01:49.450558 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-04-16 07:01:49.551774 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-04-16 07:01:49.551900 | orchestrator |
2026-04-16 07:01:49.551916 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-04-16 07:01:49.615045 | orchestrator | ok: [testbed-manager]
2026-04-16 07:01:49.615121 | orchestrator |
2026-04-16 07:01:49.615130 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-04-16 07:01:52.400896 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-04-16 07:01:52.400990 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-04-16 07:01:52.400998 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-04-16 07:01:52.401002 | orchestrator | 2026-04-16 07:01:52.401007 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-04-16 07:01:53.430330 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:53.430531 | orchestrator | 2026-04-16 07:01:53.430552 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-04-16 07:01:54.427949 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:54.428085 | orchestrator | 2026-04-16 07:01:54.428113 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-04-16 07:01:55.399958 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:55.400051 | orchestrator | 2026-04-16 07:01:55.400062 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-04-16 07:01:55.489301 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-04-16 07:01:55.489458 | orchestrator | 2026-04-16 07:01:55.489477 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-04-16 07:01:55.559980 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:55.560082 | orchestrator | 2026-04-16 07:01:55.560098 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-04-16 07:01:56.505847 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-04-16 07:01:56.505916 | orchestrator | 2026-04-16 07:01:56.505923 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-04-16 07:01:56.590320 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-04-16 07:01:56.590541 | orchestrator | 2026-04-16 07:01:56.590571 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-04-16 07:01:57.550350 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:57.550534 | orchestrator | 2026-04-16 07:01:57.550552 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-04-16 07:01:58.601596 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:58.601694 | orchestrator | 2026-04-16 07:01:58.601707 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-04-16 07:01:58.683750 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:01:58.683852 | orchestrator | 2026-04-16 07:01:58.683868 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-04-16 07:01:58.754290 | orchestrator | ok: [testbed-manager] 2026-04-16 07:01:58.754463 | orchestrator | 2026-04-16 07:01:58.754492 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-04-16 07:02:01.101923 | orchestrator | changed: [testbed-manager] 2026-04-16 07:02:01.102080 | orchestrator | 2026-04-16 07:02:01.102100 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-04-16 07:03:06.745715 | orchestrator | changed: [testbed-manager] 2026-04-16 07:03:06.745833 | orchestrator | 2026-04-16 07:03:06.745853 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-04-16 07:03:07.860631 | orchestrator | ok: [testbed-manager] 2026-04-16 07:03:07.860757 | orchestrator | 2026-04-16 07:03:07.860776 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-04-16 07:03:07.921767 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:03:07.921873 | orchestrator | 2026-04-16 07:03:07.921888 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-04-16 
07:03:08.703540 | orchestrator | ok: [testbed-manager] 2026-04-16 07:03:08.703641 | orchestrator | 2026-04-16 07:03:08.703662 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-04-16 07:03:08.778640 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:03:08.778730 | orchestrator | 2026-04-16 07:03:08.778740 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-16 07:03:08.778749 | orchestrator | 2026-04-16 07:03:08.778756 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-04-16 07:03:23.394216 | orchestrator | changed: [testbed-manager] 2026-04-16 07:03:23.394315 | orchestrator | 2026-04-16 07:03:23.394328 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-04-16 07:04:23.451847 | orchestrator | Pausing for 60 seconds 2026-04-16 07:04:23.451961 | orchestrator | changed: [testbed-manager] 2026-04-16 07:04:23.451976 | orchestrator | 2026-04-16 07:04:23.451988 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-04-16 07:04:23.511065 | orchestrator | ok: [testbed-manager] 2026-04-16 07:04:23.511192 | orchestrator | 2026-04-16 07:04:23.511212 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-04-16 07:04:26.974672 | orchestrator | changed: [testbed-manager] 2026-04-16 07:04:26.974790 | orchestrator | 2026-04-16 07:04:26.974818 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-04-16 07:05:29.604177 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-04-16 07:05:29.604275 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-04-16 07:05:29.604287 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-04-16 07:05:29.604296 | orchestrator | changed: [testbed-manager] 2026-04-16 07:05:29.604306 | orchestrator | 2026-04-16 07:05:29.604314 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-04-16 07:05:35.371302 | orchestrator | changed: [testbed-manager] 2026-04-16 07:05:35.371443 | orchestrator | 2026-04-16 07:05:35.371531 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-04-16 07:05:35.456023 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-04-16 07:05:35.456160 | orchestrator | 2026-04-16 07:05:35.456188 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-04-16 07:05:35.456208 | orchestrator | 2026-04-16 07:05:35.456284 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-04-16 07:05:35.524902 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:05:35.525001 | orchestrator | 2026-04-16 07:05:35.525016 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-04-16 07:05:35.599622 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-04-16 07:05:35.599715 | orchestrator | 2026-04-16 07:05:35.599729 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-04-16 07:05:36.704979 | orchestrator | changed: [testbed-manager] 2026-04-16 07:05:36.705082 | orchestrator | 2026-04-16 07:05:36.705099 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-04-16 07:05:40.012618 
| orchestrator | ok: [testbed-manager]
2026-04-16 07:05:40.012704 | orchestrator |
2026-04-16 07:05:40.012715 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-04-16 07:05:40.098576 | orchestrator | ok: [testbed-manager] => {
2026-04-16 07:05:40.098678 | orchestrator | "version_check_result.stdout_lines": [
2026-04-16 07:05:40.098694 | orchestrator | "=== OSISM Container Version Check ===",
2026-04-16 07:05:40.098705 | orchestrator | "Checking running containers against expected versions...",
2026-04-16 07:05:40.098718 | orchestrator | "",
2026-04-16 07:05:40.098730 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-04-16 07:05:40.098741 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-16 07:05:40.098752 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.098763 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20260322.0",
2026-04-16 07:05:40.098774 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.098785 | orchestrator | "",
2026-04-16 07:05:40.098796 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-04-16 07:05:40.098807 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-16 07:05:40.098818 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.098828 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20260322.0",
2026-04-16 07:05:40.098839 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.098850 | orchestrator | "",
2026-04-16 07:05:40.098861 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-04-16 07:05:40.098871 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-16 07:05:40.098882 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.098893 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20260322.0",
2026-04-16 07:05:40.098903 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.098914 | orchestrator | "",
2026-04-16 07:05:40.098925 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-04-16 07:05:40.098936 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-16 07:05:40.098946 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.098957 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20260322.0",
2026-04-16 07:05:40.098968 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.098979 | orchestrator | "",
2026-04-16 07:05:40.098990 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-04-16 07:05:40.099001 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-16 07:05:40.099011 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099025 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20260328.0",
2026-04-16 07:05:40.099037 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099049 | orchestrator | "",
2026-04-16 07:05:40.099062 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-04-16 07:05:40.099098 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099121 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099134 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099149 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099161 | orchestrator | "",
2026-04-16 07:05:40.099174 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-04-16 07:05:40.099186 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-16 07:05:40.099200 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099213 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-04-16 07:05:40.099226 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099237 | orchestrator | "",
2026-04-16 07:05:40.099248 | orchestrator | "Checking service: mariadb (MariaDB for ARA)",
2026-04-16 07:05:40.099259 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-16 07:05:40.099270 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099281 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-04-16 07:05:40.099292 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099302 | orchestrator | "",
2026-04-16 07:05:40.099313 | orchestrator | "Checking service: frontend (OSISM Frontend)",
2026-04-16 07:05:40.099324 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-16 07:05:40.099335 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099346 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20260320.0",
2026-04-16 07:05:40.099356 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099372 | orchestrator | "",
2026-04-16 07:05:40.099383 | orchestrator | "Checking service: redis (Redis Cache)",
2026-04-16 07:05:40.099394 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-16 07:05:40.099405 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099416 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-04-16 07:05:40.099427 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099438 | orchestrator | "",
2026-04-16 07:05:40.099449 | orchestrator | "Checking service: api (OSISM API Service)",
2026-04-16 07:05:40.099485 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099497 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099507 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099518 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099529 | orchestrator | "",
2026-04-16 07:05:40.099540 | orchestrator | "Checking service: listener (OpenStack Event Listener)",
2026-04-16 07:05:40.099551 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099562 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099573 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099583 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099594 | orchestrator | "",
2026-04-16 07:05:40.099605 | orchestrator | "Checking service: openstack (OpenStack Integration)",
2026-04-16 07:05:40.099616 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099627 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099637 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099648 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099659 | orchestrator | "",
2026-04-16 07:05:40.099670 | orchestrator | "Checking service: beat (Celery Beat Scheduler)",
2026-04-16 07:05:40.099681 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099691 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099703 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099730 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099742 | orchestrator | "",
2026-04-16 07:05:40.099753 | orchestrator | "Checking service: flower (Celery Flower Monitor)",
2026-04-16 07:05:40.099764 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099782 | orchestrator | " Enabled: true",
2026-04-16 07:05:40.099793 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20260320.0",
2026-04-16 07:05:40.099803 | orchestrator | " Status: ✅ MATCH",
2026-04-16 07:05:40.099814 | orchestrator | "",
2026-04-16 07:05:40.099825 | orchestrator | "=== Summary ===",
2026-04-16 07:05:40.099836 | orchestrator | "Errors (version mismatches): 0",
2026-04-16 07:05:40.099847 | orchestrator | "Warnings (expected containers not running): 0",
2026-04-16 07:05:40.099857 | orchestrator | "",
2026-04-16 07:05:40.099868 | orchestrator | "✅ All running containers match expected versions!"
2026-04-16 07:05:40.099879 | orchestrator | ]
2026-04-16 07:05:40.099890 | orchestrator | }
2026-04-16 07:05:40.099901 | orchestrator |
2026-04-16 07:05:40.099912 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-04-16 07:05:40.166639 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:05:40.166741 | orchestrator |
2026-04-16 07:05:40.166756 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:05:40.166769 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2026-04-16 07:05:40.166781 | orchestrator |
2026-04-16 07:05:52.911778 | orchestrator | 2026-04-16 07:05:52 | INFO  | Task 06957bca-cd84-43d3-b5f7-8390d5bfb4c2 (sync inventory) is running in background. Output coming soon.
2026-04-16 07:06:21.030184 | orchestrator | 2026-04-16 07:05:54 | INFO  | Starting group_vars file reorganization
2026-04-16 07:06:21.030344 | orchestrator | 2026-04-16 07:05:54 | INFO  | Moved 0 file(s) to their respective directories
2026-04-16 07:06:21.030364 | orchestrator | 2026-04-16 07:05:54 | INFO  | Group_vars file reorganization completed
2026-04-16 07:06:21.030378 | orchestrator | 2026-04-16 07:05:57 | INFO  | Starting variable preparation from inventory
2026-04-16 07:06:21.030392 | orchestrator | 2026-04-16 07:05:59 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-04-16 07:06:21.030406 | orchestrator | 2026-04-16 07:05:59 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-04-16 07:06:21.030419 | orchestrator | 2026-04-16 07:05:59 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-04-16 07:06:21.030432 | orchestrator | 2026-04-16 07:05:59 | INFO  | 3 file(s) written, 6 host(s) processed
2026-04-16 07:06:21.030446 | orchestrator | 2026-04-16 07:05:59 | INFO  | Variable preparation completed
2026-04-16 07:06:21.030459 | orchestrator | 2026-04-16 07:06:01 | INFO  | Starting inventory overwrite handling
2026-04-16 07:06:21.030526 | orchestrator | 2026-04-16 07:06:01 | INFO  | Handling group overwrites in 99-overwrite
2026-04-16 07:06:21.030541 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removing group frr:children from 60-generic
2026-04-16 07:06:21.030554 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removing group netbird:children from 50-infrastructure
2026-04-16 07:06:21.030567 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removing group ceph-mds from 50-ceph
2026-04-16 07:06:21.030580 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removing group ceph-rgw from 50-ceph
2026-04-16 07:06:21.030593 | orchestrator | 2026-04-16 07:06:01 | INFO  | Handling group overwrites in 20-roles
2026-04-16 07:06:21.030606 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removing group k3s_node from 50-infrastructure
2026-04-16 07:06:21.030620 | orchestrator | 2026-04-16 07:06:01 | INFO  | Removed 5 group(s) in total
2026-04-16 07:06:21.030633 | orchestrator | 2026-04-16 07:06:01 | INFO  | Inventory overwrite handling completed
2026-04-16 07:06:21.030646 | orchestrator | 2026-04-16 07:06:02 | INFO  | Starting merge of inventory files
2026-04-16 07:06:21.030660 | orchestrator | 2026-04-16 07:06:02 | INFO  | Inventory files merged successfully
2026-04-16 07:06:21.030707 | orchestrator | 2026-04-16 07:06:07 | INFO  | Generating minified hosts file
2026-04-16 07:06:21.030721 | orchestrator | 2026-04-16 07:06:08 | INFO  | Successfully wrote minified hosts file to /inventory.merge/hosts-minified.yml
2026-04-16 07:06:21.030752 | orchestrator | 2026-04-16 07:06:08 | INFO  | Successfully wrote fast inventory to /inventory.merge/fast/hosts.json
2026-04-16 07:06:21.030766 | orchestrator | 2026-04-16 07:06:09 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-04-16 07:06:21.030779 | orchestrator | 2026-04-16 07:06:19 | INFO  | Successfully wrote ClusterShell configuration
2026-04-16 07:06:21.251139 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-04-16 07:06:21.251264 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-04-16 07:06:21.251278 | orchestrator | + local max_attempts=60
2026-04-16 07:06:21.251292 | orchestrator | + local name=kolla-ansible
2026-04-16 07:06:21.251303 | orchestrator | + local attempt_num=1
2026-04-16 07:06:21.251341 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-04-16 07:06:21.284152 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-16 07:06:21.284290 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-04-16 07:06:21.284314 | orchestrator | + local max_attempts=60
2026-04-16 07:06:21.284333 | orchestrator | + local name=osism-ansible
2026-04-16 07:06:21.284351 | orchestrator | + local attempt_num=1
2026-04-16 07:06:21.284762 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-04-16 07:06:21.317802 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-04-16 07:06:21.317900 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-04-16 07:06:21.471172 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-04-16 07:06:21.471275 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471291 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471303 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-04-16 07:06:21.471335 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp
2026-04-16 07:06:21.471347 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471358 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471369 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-04-16 07:06:21.471380 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" listener 2 minutes ago Restarting (0) 42 seconds ago
2026-04-16 07:06:21.471391 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 2 minutes (healthy) 3306/tcp
2026-04-16 07:06:21.471401 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471436 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 2 minutes (healthy) 6379/tcp
2026-04-16 07:06:21.471448 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471459 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-04-16 07:06:21.471525 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20260322.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.471537 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-04-16 07:06:21.476321 | orchestrator | + [[ '' == \t\r\u\e ]]
2026-04-16 07:06:21.476379 | orchestrator | + [[ '' == \f\a\l\s\e ]]
2026-04-16 07:06:21.476393 | orchestrator | + osism apply facts
2026-04-16 07:06:32.819672 | orchestrator | 2026-04-16 07:06:32 | INFO  | Prepare task for execution of facts.
2026-04-16 07:06:32.893852 | orchestrator | 2026-04-16 07:06:32 | INFO  | Task 0bfb1850-d4f5-40a4-a81b-c0a6b47d7f51 (facts) was prepared for execution.
2026-04-16 07:06:32.893951 | orchestrator | 2026-04-16 07:06:32 | INFO  | It takes a moment until task 0bfb1850-d4f5-40a4-a81b-c0a6b47d7f51 (facts) has been started and output is visible here.
2026-04-16 07:06:56.798563 | orchestrator |
2026-04-16 07:06:56.798654 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-04-16 07:06:56.798664 | orchestrator |
2026-04-16 07:06:56.798670 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-04-16 07:06:56.798676 | orchestrator | Thursday 16 April 2026 07:06:38 +0000 (0:00:02.021) 0:00:02.021 ********
2026-04-16 07:06:56.798681 | orchestrator | ok: [testbed-manager]
2026-04-16 07:06:56.798688 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:06:56.798694 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:06:56.798699 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:06:56.798704 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:06:56.798709 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:06:56.798714 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:06:56.798719 | orchestrator |
2026-04-16 07:06:56.798725 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-04-16 07:06:56.798730 | orchestrator | Thursday 16 April 2026 07:06:42 +0000 (0:00:04.012) 0:00:06.034 ********
2026-04-16 07:06:56.798735 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:06:56.798741 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:06:56.798746 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:06:56.798752 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:06:56.798757 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:06:56.798761 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:06:56.798767 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:06:56.798772 | orchestrator |
2026-04-16 07:06:56.798777 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-04-16 07:06:56.798836 | orchestrator |
2026-04-16 07:06:56.798845 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-04-16 07:06:56.798850 | orchestrator | Thursday 16 April 2026 07:06:45 +0000 (0:00:02.848) 0:00:08.883 ********
2026-04-16 07:06:56.798855 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:06:56.798861 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:06:56.798866 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:06:56.798871 | orchestrator | ok: [testbed-manager]
2026-04-16 07:06:56.798876 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:06:56.798902 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:06:56.798907 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:06:56.798912 | orchestrator |
2026-04-16 07:06:56.798918 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-04-16 07:06:56.798923 | orchestrator |
2026-04-16 07:06:56.798928 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-04-16 07:06:56.798934 | orchestrator | Thursday 16 April 2026 07:06:53 +0000 (0:00:08.510) 0:00:17.394 ********
2026-04-16 07:06:56.798939 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:06:56.798944 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:06:56.798949 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:06:56.798954 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:06:56.798959 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:06:56.798964 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:06:56.798969 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:06:56.798974 | orchestrator |
2026-04-16 07:06:56.798979 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:06:56.798985 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.798991 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.798996 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.799001 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.799007 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.799012 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.799017 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 07:06:56.799022 | orchestrator |
2026-04-16 07:06:56.799027 | orchestrator |
2026-04-16 07:06:56.799032 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:06:56.799037 | orchestrator | Thursday 16 April 2026 07:06:56 +0000 (0:00:02.802) 0:00:20.196 ********
2026-04-16 07:06:56.799042 | orchestrator | ===============================================================================
2026-04-16 07:06:56.799047 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.51s
2026-04-16 07:06:56.799053 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 4.01s
2026-04-16 07:06:56.799058 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 2.85s
2026-04-16 07:06:56.799071 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.80s
2026-04-16 07:06:56.945151 | orchestrator | + [[ 10.0.0 != \l\a\t\e\s\t ]]
2026-04-16 07:06:56.945320 | orchestrator | ++ semver 10.0.0 10.0.0-0
2026-04-16 07:06:56.998447 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 07:06:56.998823 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible
2026-04-16 07:06:57.024125 |
orchestrator | + OPENSTACK_VERSION=2025.1 2026-04-16 07:06:57.024230 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-04-16 07:06:57.028788 | orchestrator | + set -e 2026-04-16 07:06:57.028851 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-04-16 07:06:57.028866 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-04-16 07:06:57.035377 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-04-16 07:06:57.043101 | orchestrator | 2026-04-16 07:06:57.043200 | orchestrator | # UPGRADE SERVICES 2026-04-16 07:06:57.043240 | orchestrator | 2026-04-16 07:06:57.043251 | orchestrator | + set -e 2026-04-16 07:06:57.043262 | orchestrator | + echo 2026-04-16 07:06:57.043272 | orchestrator | + echo '# UPGRADE SERVICES' 2026-04-16 07:06:57.043282 | orchestrator | + echo 2026-04-16 07:06:57.043292 | orchestrator | + source /opt/manager-vars.sh 2026-04-16 07:06:57.044194 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-04-16 07:06:57.044297 | orchestrator | ++ NUMBER_OF_NODES=6 2026-04-16 07:06:57.044319 | orchestrator | ++ export CEPH_VERSION=reef 2026-04-16 07:06:57.044338 | orchestrator | ++ CEPH_VERSION=reef 2026-04-16 07:06:57.044357 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-04-16 07:06:57.044376 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-04-16 07:06:57.044394 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-04-16 07:06:57.044412 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-04-16 07:06:57.044431 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-04-16 07:06:57.044450 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-04-16 07:06:57.044467 | orchestrator | ++ export ARA=false 2026-04-16 07:06:57.044514 | orchestrator | ++ ARA=false 2026-04-16 07:06:57.044531 | orchestrator | ++ export DEPLOY_MODE=manager 2026-04-16 07:06:57.044548 | orchestrator | ++ DEPLOY_MODE=manager 
2026-04-16 07:06:57.044565 | orchestrator | ++ export TEMPEST=false
2026-04-16 07:06:57.044581 | orchestrator | ++ TEMPEST=false
2026-04-16 07:06:57.044598 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 07:06:57.044616 | orchestrator | ++ IS_ZUUL=true
2026-04-16 07:06:57.044631 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 07:06:57.044647 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 07:06:57.044663 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 07:06:57.044677 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 07:06:57.044693 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 07:06:57.044730 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 07:06:57.044761 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 07:06:57.044778 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 07:06:57.044795 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 07:06:57.044811 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 07:06:57.044828 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-16 07:06:57.044844 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-16 07:06:57.044862 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-04-16 07:06:57.044878 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-04-16 07:06:57.044895 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-04-16 07:06:57.051218 | orchestrator | + set -e
2026-04-16 07:06:57.051316 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 07:06:57.051420 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 07:06:57.051435 | orchestrator | ++ INTERACTIVE=false
2026-04-16 07:06:57.051444 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 07:06:57.051452 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 07:06:57.051743 | orchestrator | + source /opt/manager-vars.sh
2026-04-16 07:06:57.051774 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-16 07:06:57.051789 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-16 07:06:57.051802 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-16 07:06:57.051814 | orchestrator | ++ CEPH_VERSION=reef
2026-04-16 07:06:57.051822 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-16 07:06:57.051831 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-16 07:06:57.051839 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 07:06:57.051847 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 07:06:57.051855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-16 07:06:57.051863 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-16 07:06:57.051871 | orchestrator | ++ export ARA=false
2026-04-16 07:06:57.051879 | orchestrator | ++ ARA=false
2026-04-16 07:06:57.051887 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-16 07:06:57.051895 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-16 07:06:57.051903 | orchestrator | ++ export TEMPEST=false
2026-04-16 07:06:57.051911 | orchestrator | ++ TEMPEST=false
2026-04-16 07:06:57.051919 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 07:06:57.051927 | orchestrator | ++ IS_ZUUL=true
2026-04-16 07:06:57.051935 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 07:06:57.051944 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 07:06:57.051952 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 07:06:57.051960 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 07:06:57.051967 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 07:06:57.051975 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 07:06:57.051983 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 07:06:57.052058 | orchestrator |
2026-04-16 07:06:57.052070 | orchestrator | # PULL IMAGES
2026-04-16 07:06:57.052078 | orchestrator |
2026-04-16 07:06:57.052086 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 07:06:57.052118 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 07:06:57.052126 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 07:06:57.052134 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-16 07:06:57.052142 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-16 07:06:57.052150 | orchestrator | + echo
2026-04-16 07:06:57.052158 | orchestrator | + echo '# PULL IMAGES'
2026-04-16 07:06:57.052166 | orchestrator | + echo
2026-04-16 07:06:57.053060 | orchestrator | ++ semver 9.5.0 7.0.0
2026-04-16 07:06:57.110560 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 07:06:57.110658 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-04-16 07:06:58.313973 | orchestrator | 2026-04-16 07:06:58 | INFO  | Trying to run play pull-images in environment custom
2026-04-16 07:07:08.388676 | orchestrator | 2026-04-16 07:07:08 | INFO  | Prepare task for execution of pull-images.
2026-04-16 07:07:08.475692 | orchestrator | 2026-04-16 07:07:08 | INFO  | Task 556c294b-52d1-429e-ab40-cd8d2b6e6a67 (pull-images) was prepared for execution.
2026-04-16 07:07:08.475807 | orchestrator | 2026-04-16 07:07:08 | INFO  | Task 556c294b-52d1-429e-ab40-cd8d2b6e6a67 is running in background. No more output. Check ARA for logs.
2026-04-16 07:07:08.691991 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-04-16 07:07:08.701821 | orchestrator | + set -e
2026-04-16 07:07:08.701931 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 07:07:08.701949 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 07:07:08.701962 | orchestrator | ++ INTERACTIVE=false
2026-04-16 07:07:08.701974 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 07:07:08.701985 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 07:07:08.701996 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 07:07:08.702821 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 07:07:08.711983 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-16 07:07:08.712059 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-16 07:07:08.712073 | orchestrator | ++ semver 10.0.0 8.0.3
2026-04-16 07:07:08.757868 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 07:07:08.757975 | orchestrator | + osism apply frr
2026-04-16 07:07:20.070124 | orchestrator | 2026-04-16 07:07:20 | INFO  | Prepare task for execution of frr.
2026-04-16 07:07:20.148908 | orchestrator | 2026-04-16 07:07:20 | INFO  | Task a3ccdd1e-8348-4f34-ae0c-06b9141b772b (frr) was prepared for execution.
2026-04-16 07:07:20.148999 | orchestrator | 2026-04-16 07:07:20 | INFO  | It takes a moment until task a3ccdd1e-8348-4f34-ae0c-06b9141b772b (frr) has been started and output is visible here.
2026-04-16 07:07:52.706605 | orchestrator |
2026-04-16 07:07:52.706716 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-04-16 07:07:52.706731 | orchestrator |
2026-04-16 07:07:52.706741 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-04-16 07:07:52.706750 | orchestrator | Thursday 16 April 2026 07:07:25 +0000 (0:00:02.770) 0:00:02.770 ********
2026-04-16 07:07:52.706760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-04-16 07:07:52.706771 | orchestrator |
2026-04-16 07:07:52.706779 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-04-16 07:07:52.706788 | orchestrator | Thursday 16 April 2026 07:07:28 +0000 (0:00:02.661) 0:00:05.432 ********
2026-04-16 07:07:52.706797 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.706808 | orchestrator |
2026-04-16 07:07:52.706817 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-04-16 07:07:52.706837 | orchestrator | Thursday 16 April 2026 07:07:30 +0000 (0:00:02.313) 0:00:07.745 ********
2026-04-16 07:07:52.706846 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.706855 | orchestrator |
2026-04-16 07:07:52.706864 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-04-16 07:07:52.706873 | orchestrator | Thursday 16 April 2026 07:07:33 +0000 (0:00:02.348) 0:00:10.094 ********
2026-04-16 07:07:52.706882 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.706891 | orchestrator |
2026-04-16 07:07:52.706900 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-04-16 07:07:52.706930 | orchestrator | Thursday 16 April 2026 07:07:34 +0000 (0:00:01.661) 0:00:11.756 ********
2026-04-16 07:07:52.706940 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.706948 | orchestrator |
2026-04-16 07:07:52.706957 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-04-16 07:07:52.706966 | orchestrator | Thursday 16 April 2026 07:07:36 +0000 (0:00:01.844) 0:00:13.600 ********
2026-04-16 07:07:52.706974 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.706983 | orchestrator |
2026-04-16 07:07:52.706996 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ********
2026-04-16 07:07:52.707005 | orchestrator | Thursday 16 April 2026 07:07:39 +0000 (0:00:02.380) 0:00:15.981 ********
2026-04-16 07:07:52.707028 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:07:52.707038 | orchestrator |
2026-04-16 07:07:52.707047 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] ***
2026-04-16 07:07:52.707065 | orchestrator | Thursday 16 April 2026 07:07:40 +0000 (0:00:01.132) 0:00:17.113 ********
2026-04-16 07:07:52.707074 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:07:52.707083 | orchestrator |
2026-04-16 07:07:52.707092 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] **********
2026-04-16 07:07:52.707100 | orchestrator | Thursday 16 April 2026 07:07:41 +0000 (0:00:01.259) 0:00:18.372 ********
2026-04-16 07:07:52.707109 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:07:52.707118 | orchestrator |
2026-04-16 07:07:52.707126 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-04-16 07:07:52.707136 | orchestrator | Thursday 16 April 2026 07:07:42 +0000 (0:00:01.154) 0:00:19.527 ********
2026-04-16 07:07:52.707145 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:07:52.707154 | orchestrator |
2026-04-16 07:07:52.707163 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-04-16 07:07:52.707171 | orchestrator | Thursday 16 April 2026 07:07:43 +0000 (0:00:01.177) 0:00:20.704 ********
2026-04-16 07:07:52.707180 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:07:52.707189 | orchestrator |
2026-04-16 07:07:52.707197 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-04-16 07:07:52.707206 | orchestrator | Thursday 16 April 2026 07:07:44 +0000 (0:00:01.117) 0:00:21.821 ********
2026-04-16 07:07:52.707215 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.707223 | orchestrator |
2026-04-16 07:07:52.707232 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-04-16 07:07:52.707240 | orchestrator | Thursday 16 April 2026 07:07:46 +0000 (0:00:01.885) 0:00:23.707 ********
2026-04-16 07:07:52.707249 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-04-16 07:07:52.707258 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-04-16 07:07:52.707268 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-04-16 07:07:52.707277 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-04-16 07:07:52.707285 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-04-16 07:07:52.707294 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-04-16 07:07:52.707303 | orchestrator |
2026-04-16 07:07:52.707311 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-04-16 07:07:52.707320 | orchestrator | Thursday 16 April 2026 07:07:49 +0000 (0:00:03.215) 0:00:26.922 ********
2026-04-16 07:07:52.707329 | orchestrator | ok: [testbed-manager]
2026-04-16 07:07:52.707337 | orchestrator |
2026-04-16 07:07:52.707346 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:07:52.707355 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 07:07:52.707370 | orchestrator |
2026-04-16 07:07:52.707379 | orchestrator |
2026-04-16 07:07:52.707428 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:07:52.707437 | orchestrator | Thursday 16 April 2026 07:07:52 +0000 (0:00:02.466) 0:00:29.389 ********
2026-04-16 07:07:52.707446 | orchestrator | ===============================================================================
2026-04-16 07:07:52.707473 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.22s
2026-04-16 07:07:52.707485 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.66s
2026-04-16 07:07:52.707519 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.47s
2026-04-16 07:07:52.707533 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.38s
2026-04-16 07:07:52.707548 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.35s
2026-04-16 07:07:52.707563 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.31s
2026-04-16 07:07:52.707577 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.89s
2026-04-16 07:07:52.707592 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.84s
2026-04-16 07:07:52.707605 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.66s
2026-04-16 07:07:52.707614 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 1.26s
2026-04-16 07:07:52.707623 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.18s
2026-04-16 07:07:52.707631 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 1.15s
2026-04-16 07:07:52.707640 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 1.13s
2026-04-16 07:07:52.707648 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.12s
2026-04-16 07:07:52.903164 | orchestrator | + osism apply kubernetes
2026-04-16 07:07:54.193787 | orchestrator | 2026-04-16 07:07:54 | INFO  | Prepare task for execution of kubernetes.
2026-04-16 07:07:54.262921 | orchestrator | 2026-04-16 07:07:54 | INFO  | Task d2480e05-c2eb-40d3-9aa7-f1191a9f7124 (kubernetes) was prepared for execution.
2026-04-16 07:07:54.263021 | orchestrator | 2026-04-16 07:07:54 | INFO  | It takes a moment until task d2480e05-c2eb-40d3-9aa7-f1191a9f7124 (kubernetes) has been started and output is visible here.
2026-04-16 07:08:36.347706 | orchestrator |
2026-04-16 07:08:36.347880 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-04-16 07:08:36.347913 | orchestrator |
2026-04-16 07:08:36.347935 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-04-16 07:08:36.347956 | orchestrator | Thursday 16 April 2026 07:07:59 +0000 (0:00:01.820) 0:00:01.820 ********
2026-04-16 07:08:36.347977 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.347999 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.348034 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.348056 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.348077 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.348097 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.348117 | orchestrator |
2026-04-16 07:08:36.348137 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-04-16 07:08:36.348158 | orchestrator | Thursday 16 April 2026 07:08:04 +0000 (0:00:04.396) 0:00:06.217 ********
2026-04-16 07:08:36.348180 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.348202 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.348223 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.348243 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.348264 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.348284 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.348306 | orchestrator |
2026-04-16 07:08:36.348327 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-04-16 07:08:36.348381 | orchestrator | Thursday 16 April 2026 07:08:05 +0000 (0:00:01.775) 0:00:07.992 ********
2026-04-16 07:08:36.348403 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.348424 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.348443 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.348463 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.348484 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.348528 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.348547 | orchestrator |
2026-04-16 07:08:36.348566 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-04-16 07:08:36.348586 | orchestrator | Thursday 16 April 2026 07:08:07 +0000 (0:00:01.930) 0:00:09.923 ********
2026-04-16 07:08:36.348606 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.348624 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.348642 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.348659 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.348677 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.348696 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.348715 | orchestrator |
2026-04-16 07:08:36.348735 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-04-16 07:08:36.348753 | orchestrator | Thursday 16 April 2026 07:08:10 +0000 (0:00:02.990) 0:00:12.913 ********
2026-04-16 07:08:36.348771 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.348790 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.348806 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.348817 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.348828 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.348839 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.348850 | orchestrator |
2026-04-16 07:08:36.348860 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-04-16 07:08:36.348871 | orchestrator | Thursday 16 April 2026 07:08:13 +0000 (0:00:02.970) 0:00:15.883 ********
2026-04-16 07:08:36.348882 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.348892 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.348903 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.348913 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.348924 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.348934 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.348945 | orchestrator |
2026-04-16 07:08:36.348956 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-04-16 07:08:36.348967 | orchestrator | Thursday 16 April 2026 07:08:16 +0000 (0:00:02.394) 0:00:18.278 ********
2026-04-16 07:08:36.348977 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.348988 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.348999 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349010 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349020 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349031 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349041 | orchestrator |
2026-04-16 07:08:36.349052 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-04-16 07:08:36.349063 | orchestrator | Thursday 16 April 2026 07:08:17 +0000 (0:00:01.712) 0:00:19.990 ********
2026-04-16 07:08:36.349074 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.349085 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.349095 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349106 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349117 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349127 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349138 | orchestrator |
2026-04-16 07:08:36.349149 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-04-16 07:08:36.349160 | orchestrator | Thursday 16 April 2026 07:08:19 +0000 (0:00:01.913) 0:00:21.904 ********
2026-04-16 07:08:36.349170 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349181 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349205 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.349216 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349227 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349238 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.349248 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349259 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349270 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349281 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349291 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349302 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349349 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349361 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349372 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349383 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-04-16 07:08:36.349394 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-04-16 07:08:36.349405 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349416 | orchestrator |
2026-04-16 07:08:36.349427 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-04-16 07:08:36.349438 | orchestrator | Thursday 16 April 2026 07:08:21 +0000 (0:00:01.931) 0:00:23.835 ********
2026-04-16 07:08:36.349448 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.349529 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.349541 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349552 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349563 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349574 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349584 | orchestrator |
2026-04-16 07:08:36.349595 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-04-16 07:08:36.349608 | orchestrator | Thursday 16 April 2026 07:08:23 +0000 (0:00:01.993) 0:00:25.829 ********
2026-04-16 07:08:36.349619 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.349630 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.349641 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.349652 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.349663 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.349673 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.349684 | orchestrator |
2026-04-16 07:08:36.349695 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-04-16 07:08:36.349706 | orchestrator | Thursday 16 April 2026 07:08:25 +0000 (0:00:01.889) 0:00:27.718 ********
2026-04-16 07:08:36.349716 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:08:36.349727 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:08:36.349738 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:08:36.349748 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:08:36.349759 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:08:36.349769 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:08:36.349780 | orchestrator |
2026-04-16 07:08:36.349791 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-04-16 07:08:36.349802 | orchestrator | Thursday 16 April 2026 07:08:28 +0000 (0:00:02.769) 0:00:30.488 ********
2026-04-16 07:08:36.349813 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.349828 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.349840 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349851 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349861 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349872 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349889 | orchestrator |
2026-04-16 07:08:36.349900 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-04-16 07:08:36.349911 | orchestrator | Thursday 16 April 2026 07:08:30 +0000 (0:00:01.766) 0:00:32.255 ********
2026-04-16 07:08:36.349930 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.349941 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.349952 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.349963 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.349973 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.349984 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.349995 | orchestrator |
2026-04-16 07:08:36.350006 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-04-16 07:08:36.350079 | orchestrator | Thursday 16 April 2026 07:08:32 +0000 (0:00:02.118) 0:00:34.374 ********
2026-04-16 07:08:36.350094 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.350105 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.350115 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.350126 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.350136 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.350147 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.350157 | orchestrator |
2026-04-16 07:08:36.350168 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-04-16 07:08:36.350179 | orchestrator | Thursday 16 April 2026 07:08:34 +0000 (0:00:01.901) 0:00:36.276 ********
2026-04-16 07:08:36.350190 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-04-16 07:08:36.350201 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-04-16 07:08:36.350211 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.350222 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-04-16 07:08:36.350233 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-04-16 07:08:36.350243 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.350254 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-04-16 07:08:36.350265 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-04-16 07:08:36.350275 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:08:36.350286 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-04-16 07:08:36.350297 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-04-16 07:08:36.350307 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:08:36.350318 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-04-16 07:08:36.350328 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-04-16 07:08:36.350339 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:08:36.350350 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-04-16 07:08:36.350360 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-04-16 07:08:36.350371 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:08:36.350382 | orchestrator |
2026-04-16 07:08:36.350401 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-04-16 07:08:36.350427 | orchestrator | Thursday 16 April 2026 07:08:36 +0000 (0:00:01.909) 0:00:38.185 ********
2026-04-16 07:08:36.350446 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:08:36.350465 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:08:36.350574 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:10:26.422238 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:10:26.422381 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.422410 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.422431 | orchestrator |
2026-04-16 07:10:26.422446 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-04-16 07:10:26.422459 | orchestrator | Thursday 16 April 2026 07:08:37 +0000 (0:00:01.672) 0:00:39.858 ********
2026-04-16 07:10:26.422471 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:10:26.422510 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:10:26.422620 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:10:26.422632 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:10:26.422642 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.422653 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.422664 | orchestrator |
2026-04-16 07:10:26.422678 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-04-16 07:10:26.422691 | orchestrator |
2026-04-16 07:10:26.422703 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-04-16 07:10:26.422724 | orchestrator | Thursday 16 April 2026 07:08:40 +0000 (0:00:02.681) 0:00:42.540 ********
2026-04-16 07:10:26.422742 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.422762 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.422781 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.422799 | orchestrator |
2026-04-16 07:10:26.422817 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-04-16 07:10:26.422836 | orchestrator | Thursday 16 April 2026 07:08:42 +0000 (0:00:02.434) 0:00:44.975 ********
2026-04-16 07:10:26.422847 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.422858 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.422869 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.422879 | orchestrator |
2026-04-16 07:10:26.422890 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-04-16 07:10:26.422901 | orchestrator | Thursday 16 April 2026 07:08:45 +0000 (0:00:03.075) 0:00:48.050 ********
2026-04-16 07:10:26.422913 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:10:26.422924 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:10:26.422935 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:10:26.422946 | orchestrator |
2026-04-16 07:10:26.422956 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-04-16 07:10:26.422967 | orchestrator | Thursday 16 April 2026 07:08:48 +0000 (0:00:02.154) 0:00:50.205 ********
2026-04-16 07:10:26.422978 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.422989 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.422999 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.423010 | orchestrator |
2026-04-16 07:10:26.423021 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-04-16 07:10:26.423032 | orchestrator | Thursday 16 April 2026 07:08:49 +0000 (0:00:01.882) 0:00:52.088 ********
2026-04-16 07:10:26.423042 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:10:26.423053 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.423064 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.423075 | orchestrator |
2026-04-16 07:10:26.423085 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-04-16 07:10:26.423096 | orchestrator | Thursday 16 April 2026 07:08:51 +0000 (0:00:01.524) 0:00:53.613 ********
2026-04-16 07:10:26.423107 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.423117 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.423128 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.423139 | orchestrator |
2026-04-16 07:10:26.423150 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-04-16 07:10:26.423160 | orchestrator | Thursday 16 April 2026 07:08:53 +0000 (0:00:02.128) 0:00:55.480 ********
2026-04-16 07:10:26.423171 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.423181 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.423192 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.423203 | orchestrator |
2026-04-16 07:10:26.423214 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-04-16 07:10:26.423225 | orchestrator | Thursday 16 April 2026 07:08:55 +0000 (0:00:02.128) 0:00:57.609 ********
2026-04-16 07:10:26.423235 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:10:26.423247 | orchestrator |
2026-04-16 07:10:26.423257 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-04-16 07:10:26.423279 | orchestrator | Thursday 16 April 2026 07:08:57 +0000 (0:00:01.711) 0:00:59.320 ********
2026-04-16 07:10:26.423290 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.423301 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:10:26.423311 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:10:26.423322 | orchestrator |
2026-04-16 07:10:26.423333 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-04-16 07:10:26.423344 | orchestrator | Thursday 16 April 2026 07:08:59 +0000 (0:00:02.261) 0:01:01.581 ********
2026-04-16 07:10:26.423355 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.423366 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.423376 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:10:26.423387 | orchestrator |
2026-04-16 07:10:26.423398 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-04-16 07:10:26.423409 | orchestrator | Thursday 16 April 2026 07:09:01 +0000 (0:00:01.720) 0:01:03.302 ********
2026-04-16 07:10:26.423419 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.423430 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.423441 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:10:26.423452 | orchestrator |
2026-04-16 07:10:26.423463 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-04-16 07:10:26.423473 | orchestrator | Thursday 16 April 2026 07:09:03 +0000 (0:00:01.871) 0:01:05.174 ********
2026-04-16 07:10:26.423485 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.423495 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.423506 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:10:26.423540 | orchestrator |
2026-04-16 07:10:26.423551 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-04-16 07:10:26.423562 | orchestrator | Thursday 16 April 2026 07:09:05 +0000 (0:00:02.688) 0:01:07.862 ********
2026-04-16 07:10:26.423573 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:10:26.423584 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:10:26.423615 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:10:26.423627 |
orchestrator | 2026-04-16 07:10:26.423637 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-04-16 07:10:26.423648 | orchestrator | Thursday 16 April 2026 07:09:07 +0000 (0:00:01.330) 0:01:09.192 ******** 2026-04-16 07:10:26.423659 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:10:26.423670 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:10:26.423681 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:10:26.423692 | orchestrator | 2026-04-16 07:10:26.423703 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-04-16 07:10:26.423713 | orchestrator | Thursday 16 April 2026 07:09:08 +0000 (0:00:01.390) 0:01:10.583 ******** 2026-04-16 07:10:26.423743 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:10:26.423754 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:10:26.423765 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:10:26.423775 | orchestrator | 2026-04-16 07:10:26.423786 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-04-16 07:10:26.423797 | orchestrator | Thursday 16 April 2026 07:09:10 +0000 (0:00:02.350) 0:01:12.933 ******** 2026-04-16 07:10:26.423808 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:10:26.423818 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:10:26.423829 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:10:26.423840 | orchestrator | 2026-04-16 07:10:26.423851 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-04-16 07:10:26.423862 | orchestrator | Thursday 16 April 2026 07:09:12 +0000 (0:00:01.923) 0:01:14.857 ******** 2026-04-16 07:10:26.423873 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:10:26.423884 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:10:26.423894 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:10:26.423905 | orchestrator | 2026-04-16 07:10:26.423916 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-04-16 07:10:26.423928 | orchestrator | Thursday 16 April 2026 07:09:14 +0000 (0:00:01.349) 0:01:16.206 ******** 2026-04-16 07:10:26.423947 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-16 07:10:26.423960 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-16 07:10:26.423971 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-04-16 07:10:26.423982 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-16 07:10:26.423993 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-16 07:10:26.424004 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-04-16 07:10:26.424015 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-16 07:10:26.424026 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-04-16 07:10:26.424037 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-04-16 07:10:26.424048 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:10:26.424059 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:10:26.424070 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:10:26.424081 | orchestrator | 2026-04-16 07:10:26.424092 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-04-16 07:10:26.424103 | orchestrator | Thursday 16 April 2026 07:09:47 +0000 (0:00:33.667) 0:01:49.874 ******** 2026-04-16 07:10:26.424114 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:10:26.424125 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:10:26.424135 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:10:26.424146 | orchestrator | 2026-04-16 07:10:26.424157 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-04-16 07:10:26.424168 | orchestrator | Thursday 16 April 2026 07:09:49 +0000 (0:00:01.379) 0:01:51.254 ******** 2026-04-16 07:10:26.424179 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:10:26.424190 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:10:26.424201 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:10:26.424212 | orchestrator | 2026-04-16 07:10:26.424223 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-04-16 07:10:26.424233 | orchestrator | Thursday 16 April 2026 07:09:51 +0000 (0:00:02.046) 0:01:53.300 ******** 2026-04-16 07:10:26.424244 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:10:26.424255 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:10:26.424266 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:10:26.424277 | orchestrator | 2026-04-16 07:10:26.424288 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-04-16 07:10:26.424298 | orchestrator | Thursday 16 April 2026 07:09:53 +0000 (0:00:02.215) 0:01:55.516 ******** 2026-04-16 07:10:26.424309 | orchestrator 
| changed: [testbed-node-1] 2026-04-16 07:10:26.424320 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:10:26.424331 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:10:26.424342 | orchestrator | 2026-04-16 07:10:26.424353 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-04-16 07:10:26.424364 | orchestrator | Thursday 16 April 2026 07:10:24 +0000 (0:00:31.059) 0:02:26.575 ******** 2026-04-16 07:10:26.424375 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:10:26.424385 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:10:26.424396 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:10:26.424407 | orchestrator | 2026-04-16 07:10:26.424418 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-04-16 07:10:26.424443 | orchestrator | Thursday 16 April 2026 07:10:26 +0000 (0:00:02.000) 0:02:28.576 ******** 2026-04-16 07:11:14.155592 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:11:14.155685 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:11:14.155694 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.155700 | orchestrator | 2026-04-16 07:11:14.155707 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-04-16 07:11:14.155714 | orchestrator | Thursday 16 April 2026 07:10:28 +0000 (0:00:01.737) 0:02:30.313 ******** 2026-04-16 07:11:14.155720 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:11:14.155727 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:11:14.155733 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:11:14.155738 | orchestrator | 2026-04-16 07:11:14.155744 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-04-16 07:11:14.155750 | orchestrator | Thursday 16 April 2026 07:10:29 +0000 (0:00:01.700) 0:02:32.014 ******** 2026-04-16 07:11:14.155755 | orchestrator | ok: [testbed-node-0] 2026-04-16 
07:11:14.155761 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:11:14.155766 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.155772 | orchestrator | 2026-04-16 07:11:14.155777 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-04-16 07:11:14.155782 | orchestrator | Thursday 16 April 2026 07:10:31 +0000 (0:00:01.644) 0:02:33.659 ******** 2026-04-16 07:11:14.155788 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:11:14.155793 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:11:14.155799 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.155804 | orchestrator | 2026-04-16 07:11:14.155813 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-04-16 07:11:14.155823 | orchestrator | Thursday 16 April 2026 07:10:33 +0000 (0:00:01.536) 0:02:35.195 ******** 2026-04-16 07:11:14.155831 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:11:14.155841 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:11:14.155850 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:11:14.155859 | orchestrator | 2026-04-16 07:11:14.155868 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-04-16 07:11:14.155877 | orchestrator | Thursday 16 April 2026 07:10:34 +0000 (0:00:01.650) 0:02:36.846 ******** 2026-04-16 07:11:14.155886 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:11:14.155895 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:11:14.155904 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.155914 | orchestrator | 2026-04-16 07:11:14.155924 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-04-16 07:11:14.155934 | orchestrator | Thursday 16 April 2026 07:10:36 +0000 (0:00:01.750) 0:02:38.596 ******** 2026-04-16 07:11:14.155941 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:11:14.155947 | orchestrator | changed: 
[testbed-node-1] 2026-04-16 07:11:14.155952 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:11:14.155958 | orchestrator | 2026-04-16 07:11:14.155963 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-04-16 07:11:14.155968 | orchestrator | Thursday 16 April 2026 07:10:38 +0000 (0:00:01.812) 0:02:40.408 ******** 2026-04-16 07:11:14.155974 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:11:14.155980 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:11:14.155985 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:11:14.155990 | orchestrator | 2026-04-16 07:11:14.155996 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-04-16 07:11:14.156001 | orchestrator | Thursday 16 April 2026 07:10:40 +0000 (0:00:01.828) 0:02:42.237 ******** 2026-04-16 07:11:14.156006 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:11:14.156012 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:11:14.156019 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:11:14.156028 | orchestrator | 2026-04-16 07:11:14.156035 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-04-16 07:11:14.156049 | orchestrator | Thursday 16 April 2026 07:10:41 +0000 (0:00:01.328) 0:02:43.565 ******** 2026-04-16 07:11:14.156084 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:11:14.156092 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:11:14.156101 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:11:14.156110 | orchestrator | 2026-04-16 07:11:14.156118 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-04-16 07:11:14.156126 | orchestrator | Thursday 16 April 2026 07:10:42 +0000 (0:00:01.320) 0:02:44.886 ******** 2026-04-16 07:11:14.156135 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:11:14.156144 | orchestrator | ok: [testbed-node-1] 
2026-04-16 07:11:14.156154 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.156164 | orchestrator | 2026-04-16 07:11:14.156172 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-04-16 07:11:14.156182 | orchestrator | Thursday 16 April 2026 07:10:44 +0000 (0:00:01.950) 0:02:46.836 ******** 2026-04-16 07:11:14.156190 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:11:14.156196 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:11:14.156202 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:11:14.156209 | orchestrator | 2026-04-16 07:11:14.156216 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-04-16 07:11:14.156223 | orchestrator | Thursday 16 April 2026 07:10:46 +0000 (0:00:01.650) 0:02:48.487 ******** 2026-04-16 07:11:14.156230 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-16 07:11:14.156236 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-16 07:11:14.156242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-04-16 07:11:14.156249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-16 07:11:14.156269 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-16 07:11:14.156275 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-04-16 07:11:14.156285 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-16 07:11:14.156292 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-16 07:11:14.156312 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-04-16 07:11:14.156319 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-04-16 07:11:14.156326 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-16 07:11:14.156332 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-16 07:11:14.156338 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-04-16 07:11:14.156344 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-16 07:11:14.156350 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-16 07:11:14.156356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-04-16 07:11:14.156362 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-16 07:11:14.156369 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-16 07:11:14.156375 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-04-16 07:11:14.156381 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-04-16 07:11:14.156387 | orchestrator | 2026-04-16 07:11:14.156393 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-04-16 07:11:14.156406 | orchestrator | 2026-04-16 07:11:14.156412 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-04-16 07:11:14.156418 | orchestrator | Thursday 16 April 2026 07:10:50 +0000 (0:00:04.481) 0:02:52.968 ******** 
2026-04-16 07:11:14.156424 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156430 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:11:14.156436 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156443 | orchestrator | 2026-04-16 07:11:14.156449 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-04-16 07:11:14.156455 | orchestrator | Thursday 16 April 2026 07:10:52 +0000 (0:00:01.584) 0:02:54.552 ******** 2026-04-16 07:11:14.156461 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156467 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:11:14.156472 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156477 | orchestrator | 2026-04-16 07:11:14.156483 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-04-16 07:11:14.156488 | orchestrator | Thursday 16 April 2026 07:10:54 +0000 (0:00:01.693) 0:02:56.246 ******** 2026-04-16 07:11:14.156494 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156502 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:11:14.156511 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156562 | orchestrator | 2026-04-16 07:11:14.156575 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-04-16 07:11:14.156585 | orchestrator | Thursday 16 April 2026 07:10:55 +0000 (0:00:01.388) 0:02:57.635 ******** 2026-04-16 07:11:14.156593 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 07:11:14.156598 | orchestrator | 2026-04-16 07:11:14.156604 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-04-16 07:11:14.156609 | orchestrator | Thursday 16 April 2026 07:10:57 +0000 (0:00:01.911) 0:02:59.547 ******** 2026-04-16 07:11:14.156615 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:11:14.156620 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 07:11:14.156625 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:11:14.156631 | orchestrator | 2026-04-16 07:11:14.156636 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-04-16 07:11:14.156642 | orchestrator | Thursday 16 April 2026 07:10:58 +0000 (0:00:01.359) 0:03:00.906 ******** 2026-04-16 07:11:14.156647 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:11:14.156652 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:11:14.156658 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:11:14.156663 | orchestrator | 2026-04-16 07:11:14.156668 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-04-16 07:11:14.156674 | orchestrator | Thursday 16 April 2026 07:11:00 +0000 (0:00:01.358) 0:03:02.265 ******** 2026-04-16 07:11:14.156679 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:11:14.156685 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:11:14.156690 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:11:14.156699 | orchestrator | 2026-04-16 07:11:14.156708 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-04-16 07:11:14.156717 | orchestrator | Thursday 16 April 2026 07:11:01 +0000 (0:00:01.556) 0:03:03.822 ******** 2026-04-16 07:11:14.156726 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156735 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:11:14.156743 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156752 | orchestrator | 2026-04-16 07:11:14.156760 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-04-16 07:11:14.156768 | orchestrator | Thursday 16 April 2026 07:11:03 +0000 (0:00:01.700) 0:03:05.522 ******** 2026-04-16 07:11:14.156776 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156784 | orchestrator | ok: [testbed-node-4] 
2026-04-16 07:11:14.156793 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156801 | orchestrator | 2026-04-16 07:11:14.156809 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-04-16 07:11:14.156818 | orchestrator | Thursday 16 April 2026 07:11:05 +0000 (0:00:02.201) 0:03:07.723 ******** 2026-04-16 07:11:14.156835 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:11:14.156844 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:11:14.156858 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:11:14.156868 | orchestrator | 2026-04-16 07:11:14.156876 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-04-16 07:11:14.156885 | orchestrator | Thursday 16 April 2026 07:11:08 +0000 (0:00:02.484) 0:03:10.208 ******** 2026-04-16 07:11:14.156903 | orchestrator | changed: [testbed-node-3] 2026-04-16 07:12:21.801954 | orchestrator | changed: [testbed-node-4] 2026-04-16 07:12:21.802130 | orchestrator | changed: [testbed-node-5] 2026-04-16 07:12:21.802150 | orchestrator | 2026-04-16 07:12:21.802163 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-04-16 07:12:21.802176 | orchestrator | 2026-04-16 07:12:21.802188 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-04-16 07:12:21.802200 | orchestrator | Thursday 16 April 2026 07:11:16 +0000 (0:00:08.010) 0:03:18.218 ******** 2026-04-16 07:12:21.802211 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802223 | orchestrator | 2026-04-16 07:12:21.802234 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-04-16 07:12:21.802245 | orchestrator | Thursday 16 April 2026 07:11:18 +0000 (0:00:02.151) 0:03:20.370 ******** 2026-04-16 07:12:21.802256 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802267 | orchestrator | 2026-04-16 07:12:21.802277 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-04-16 07:12:21.802288 | orchestrator | Thursday 16 April 2026 07:11:19 +0000 (0:00:01.388) 0:03:21.758 ******** 2026-04-16 07:12:21.802300 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-04-16 07:12:21.802312 | orchestrator | 2026-04-16 07:12:21.802323 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-04-16 07:12:21.802334 | orchestrator | Thursday 16 April 2026 07:11:21 +0000 (0:00:01.803) 0:03:23.562 ******** 2026-04-16 07:12:21.802345 | orchestrator | changed: [testbed-manager] 2026-04-16 07:12:21.802356 | orchestrator | 2026-04-16 07:12:21.802367 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-04-16 07:12:21.802378 | orchestrator | Thursday 16 April 2026 07:11:23 +0000 (0:00:01.783) 0:03:25.346 ******** 2026-04-16 07:12:21.802389 | orchestrator | changed: [testbed-manager] 2026-04-16 07:12:21.802400 | orchestrator | 2026-04-16 07:12:21.802410 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-04-16 07:12:21.802422 | orchestrator | Thursday 16 April 2026 07:11:24 +0000 (0:00:01.537) 0:03:26.884 ******** 2026-04-16 07:12:21.802433 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-16 07:12:21.802444 | orchestrator | 2026-04-16 07:12:21.802458 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-04-16 07:12:21.802478 | orchestrator | Thursday 16 April 2026 07:11:27 +0000 (0:00:02.999) 0:03:29.883 ******** 2026-04-16 07:12:21.802496 | orchestrator | changed: [testbed-manager -> localhost] 2026-04-16 07:12:21.802515 | orchestrator | 2026-04-16 07:12:21.802558 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-04-16 07:12:21.802577 | orchestrator | Thursday 16 April 
2026 07:11:29 +0000 (0:00:01.927) 0:03:31.811 ******** 2026-04-16 07:12:21.802596 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802616 | orchestrator | 2026-04-16 07:12:21.802636 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-04-16 07:12:21.802655 | orchestrator | Thursday 16 April 2026 07:11:31 +0000 (0:00:01.381) 0:03:33.193 ******** 2026-04-16 07:12:21.802673 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802693 | orchestrator | 2026-04-16 07:12:21.802707 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-04-16 07:12:21.802720 | orchestrator | 2026-04-16 07:12:21.802732 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-04-16 07:12:21.802744 | orchestrator | Thursday 16 April 2026 07:11:32 +0000 (0:00:01.952) 0:03:35.145 ******** 2026-04-16 07:12:21.802785 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802798 | orchestrator | 2026-04-16 07:12:21.802811 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-04-16 07:12:21.802824 | orchestrator | Thursday 16 April 2026 07:11:34 +0000 (0:00:01.133) 0:03:36.279 ******** 2026-04-16 07:12:21.802836 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 07:12:21.802849 | orchestrator | 2026-04-16 07:12:21.802860 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-04-16 07:12:21.802870 | orchestrator | Thursday 16 April 2026 07:11:35 +0000 (0:00:01.461) 0:03:37.741 ******** 2026-04-16 07:12:21.802881 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802892 | orchestrator | 2026-04-16 07:12:21.802902 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-04-16 07:12:21.802913 | orchestrator | Thursday 16 April 2026 
07:11:37 +0000 (0:00:01.752) 0:03:39.493 ******** 2026-04-16 07:12:21.802924 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802934 | orchestrator | 2026-04-16 07:12:21.802945 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-04-16 07:12:21.802955 | orchestrator | Thursday 16 April 2026 07:11:39 +0000 (0:00:02.537) 0:03:42.030 ******** 2026-04-16 07:12:21.802966 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.802977 | orchestrator | 2026-04-16 07:12:21.802987 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-04-16 07:12:21.802998 | orchestrator | Thursday 16 April 2026 07:11:41 +0000 (0:00:01.421) 0:03:43.452 ******** 2026-04-16 07:12:21.803009 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.803019 | orchestrator | 2026-04-16 07:12:21.803030 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-04-16 07:12:21.803041 | orchestrator | Thursday 16 April 2026 07:11:42 +0000 (0:00:01.434) 0:03:44.886 ******** 2026-04-16 07:12:21.803051 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.803062 | orchestrator | 2026-04-16 07:12:21.803073 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-04-16 07:12:21.803083 | orchestrator | Thursday 16 April 2026 07:11:44 +0000 (0:00:01.600) 0:03:46.486 ******** 2026-04-16 07:12:21.803094 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.803104 | orchestrator | 2026-04-16 07:12:21.803115 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-04-16 07:12:21.803126 | orchestrator | Thursday 16 April 2026 07:11:46 +0000 (0:00:02.462) 0:03:48.949 ******** 2026-04-16 07:12:21.803153 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:21.803165 | orchestrator | 2026-04-16 07:12:21.803175 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-04-16 07:12:21.803186 | orchestrator | 2026-04-16 07:12:21.803197 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-04-16 07:12:21.803228 | orchestrator | Thursday 16 April 2026 07:11:48 +0000 (0:00:01.702) 0:03:50.651 ******** 2026-04-16 07:12:21.803240 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:12:21.803251 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:12:21.803262 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:12:21.803272 | orchestrator | 2026-04-16 07:12:21.803283 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-04-16 07:12:21.803294 | orchestrator | Thursday 16 April 2026 07:11:49 +0000 (0:00:01.336) 0:03:51.987 ******** 2026-04-16 07:12:21.803305 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:21.803316 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:12:21.803327 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:12:21.803338 | orchestrator | 2026-04-16 07:12:21.803349 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-04-16 07:12:21.803359 | orchestrator | Thursday 16 April 2026 07:11:51 +0000 (0:00:01.397) 0:03:53.385 ******** 2026-04-16 07:12:21.803370 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:12:21.803390 | orchestrator | 2026-04-16 07:12:21.803401 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-04-16 07:12:21.803412 | orchestrator | Thursday 16 April 2026 07:11:53 +0000 (0:00:01.997) 0:03:55.383 ******** 2026-04-16 07:12:21.803424 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.803443 | orchestrator | 2026-04-16 07:12:21.803461 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-04-16 07:12:21.803479 | orchestrator | Thursday 16 April 2026 07:11:55 +0000 (0:00:01.882) 0:03:57.266 ******** 2026-04-16 07:12:21.803496 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.803513 | orchestrator | 2026-04-16 07:12:21.803556 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-04-16 07:12:21.803576 | orchestrator | Thursday 16 April 2026 07:11:57 +0000 (0:00:01.908) 0:03:59.174 ******** 2026-04-16 07:12:21.803593 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:21.803611 | orchestrator | 2026-04-16 07:12:21.803629 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-04-16 07:12:21.803646 | orchestrator | Thursday 16 April 2026 07:11:58 +0000 (0:00:01.162) 0:04:00.337 ******** 2026-04-16 07:12:21.803662 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.803680 | orchestrator | 2026-04-16 07:12:21.803698 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-04-16 07:12:21.803717 | orchestrator | Thursday 16 April 2026 07:12:00 +0000 (0:00:01.999) 0:04:02.336 ******** 2026-04-16 07:12:21.803736 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.803756 | orchestrator | 2026-04-16 07:12:21.803767 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-04-16 07:12:21.803778 | orchestrator | Thursday 16 April 2026 07:12:02 +0000 (0:00:02.269) 0:04:04.606 ******** 2026-04-16 07:12:21.803789 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.803799 | orchestrator | 2026-04-16 07:12:21.803810 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-04-16 07:12:21.803820 | orchestrator | Thursday 16 April 2026 07:12:03 +0000 (0:00:01.130) 0:04:05.737 ******** 2026-04-16 07:12:21.803831 | orchestrator | ok: 
[testbed-node-0 -> localhost] 2026-04-16 07:12:21.803842 | orchestrator | 2026-04-16 07:12:21.803852 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-04-16 07:12:21.803863 | orchestrator | Thursday 16 April 2026 07:12:04 +0000 (0:00:01.128) 0:04:06.865 ******** 2026-04-16 07:12:21.803874 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-04-16 07:12:21.803885 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-04-16 07:12:21.803897 | orchestrator | } 2026-04-16 07:12:21.803907 | orchestrator | 2026-04-16 07:12:21.803918 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-04-16 07:12:21.803930 | orchestrator | Thursday 16 April 2026 07:12:05 +0000 (0:00:01.144) 0:04:08.010 ******** 2026-04-16 07:12:21.803949 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:21.803966 | orchestrator | 2026-04-16 07:12:21.803983 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-04-16 07:12:21.804000 | orchestrator | Thursday 16 April 2026 07:12:06 +0000 (0:00:01.116) 0:04:09.126 ******** 2026-04-16 07:12:21.804019 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-04-16 07:12:21.804038 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-04-16 07:12:21.804053 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-04-16 07:12:21.804064 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-04-16 07:12:21.804075 | orchestrator | 2026-04-16 07:12:21.804085 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-04-16 07:12:21.804096 | orchestrator | Thursday 16 April 2026 07:12:12 +0000 (0:00:05.579) 0:04:14.706 ******** 2026-04-16 07:12:21.804106 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.804128 | orchestrator | 2026-04-16 07:12:21.804138 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-04-16 07:12:21.804149 | orchestrator | Thursday 16 April 2026 07:12:14 +0000 (0:00:02.291) 0:04:16.998 ******** 2026-04-16 07:12:21.804160 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.804171 | orchestrator | 2026-04-16 07:12:21.804181 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-04-16 07:12:21.804192 | orchestrator | Thursday 16 April 2026 07:12:17 +0000 (0:00:02.623) 0:04:19.621 ******** 2026-04-16 07:12:21.804202 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-04-16 07:12:21.804213 | orchestrator | 2026-04-16 07:12:21.804231 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-04-16 07:12:21.804242 | orchestrator | Thursday 16 April 2026 07:12:21 +0000 (0:00:04.176) 0:04:23.797 ******** 2026-04-16 07:12:21.804253 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:21.804264 | orchestrator | 2026-04-16 07:12:21.804285 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-04-16 07:12:52.130571 | orchestrator | Thursday 16 April 2026 07:12:22 +0000 (0:00:01.107) 0:04:24.904 ******** 2026-04-16 07:12:52.130681 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-04-16 07:12:52.130700 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-04-16 07:12:52.130715 | orchestrator | 2026-04-16 07:12:52.130728 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-04-16 07:12:52.130741 | orchestrator | Thursday 16 April 2026 07:12:25 +0000 (0:00:02.992) 0:04:27.897 ******** 2026-04-16 
07:12:52.130752 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:52.130764 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:12:52.130775 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:12:52.130786 | orchestrator | 2026-04-16 07:12:52.130797 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-04-16 07:12:52.130808 | orchestrator | Thursday 16 April 2026 07:12:27 +0000 (0:00:01.342) 0:04:29.240 ******** 2026-04-16 07:12:52.130819 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:12:52.130831 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:12:52.130842 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:12:52.130853 | orchestrator | 2026-04-16 07:12:52.130863 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-04-16 07:12:52.130874 | orchestrator | 2026-04-16 07:12:52.130885 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-04-16 07:12:52.130896 | orchestrator | Thursday 16 April 2026 07:12:29 +0000 (0:00:02.104) 0:04:31.344 ******** 2026-04-16 07:12:52.130908 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:52.130918 | orchestrator | 2026-04-16 07:12:52.130929 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-04-16 07:12:52.130940 | orchestrator | Thursday 16 April 2026 07:12:30 +0000 (0:00:01.170) 0:04:32.514 ******** 2026-04-16 07:12:52.130951 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-04-16 07:12:52.130963 | orchestrator | 2026-04-16 07:12:52.130974 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-04-16 07:12:52.130985 | orchestrator | Thursday 16 April 2026 07:12:31 +0000 (0:00:01.518) 0:04:34.033 ******** 2026-04-16 07:12:52.130996 | orchestrator | ok: [testbed-manager] 2026-04-16 07:12:52.131007 | 
orchestrator | 2026-04-16 07:12:52.131018 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-04-16 07:12:52.131028 | orchestrator | 2026-04-16 07:12:52.131039 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-04-16 07:12:52.131050 | orchestrator | Thursday 16 April 2026 07:12:36 +0000 (0:00:05.080) 0:04:39.113 ******** 2026-04-16 07:12:52.131061 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:12:52.131072 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:12:52.131082 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:12:52.131093 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:12:52.131124 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:12:52.131136 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:12:52.131147 | orchestrator | 2026-04-16 07:12:52.131158 | orchestrator | TASK [Manage labels] *********************************************************** 2026-04-16 07:12:52.131169 | orchestrator | Thursday 16 April 2026 07:12:38 +0000 (0:00:01.795) 0:04:40.909 ******** 2026-04-16 07:12:52.131180 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-16 07:12:52.131191 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-16 07:12:52.131202 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-04-16 07:12:52.131213 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-16 07:12:52.131223 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 07:12:52.131234 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 07:12:52.131245 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-04-16 07:12:52.131256 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-04-16 07:12:52.131267 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-04-16 07:12:52.131278 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 07:12:52.131289 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 07:12:52.131299 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 07:12:52.131310 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 07:12:52.131321 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-04-16 07:12:52.131332 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-04-16 07:12:52.131343 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-16 07:12:52.131354 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-16 07:12:52.131364 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-04-16 07:12:52.131375 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 07:12:52.131386 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 07:12:52.131397 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-04-16 07:12:52.131425 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 07:12:52.131436 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 
07:12:52.131447 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-04-16 07:12:52.131458 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 07:12:52.131469 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 07:12:52.131479 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-04-16 07:12:52.131490 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 07:12:52.131501 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 07:12:52.131511 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-04-16 07:12:52.131522 | orchestrator | 2026-04-16 07:12:52.131589 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-04-16 07:12:52.131609 | orchestrator | Thursday 16 April 2026 07:12:47 +0000 (0:00:09.120) 0:04:50.030 ******** 2026-04-16 07:12:52.131630 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:12:52.131642 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:12:52.131653 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:12:52.131664 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:52.131674 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:12:52.131685 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:12:52.131696 | orchestrator | 2026-04-16 07:12:52.131707 | orchestrator | TASK [Manage taints] *********************************************************** 2026-04-16 07:12:52.131718 | orchestrator | Thursday 16 April 2026 07:12:49 +0000 (0:00:01.615) 0:04:51.646 ******** 2026-04-16 07:12:52.131729 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:12:52.131740 | orchestrator | skipping: [testbed-node-4] 
2026-04-16 07:12:52.131750 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:12:52.131761 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:12:52.131772 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:12:52.131783 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:12:52.131793 | orchestrator | 2026-04-16 07:12:52.131804 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:12:52.131816 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 07:12:52.131829 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-16 07:12:52.131841 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-16 07:12:52.131852 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-04-16 07:12:52.131863 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 07:12:52.131873 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 07:12:52.131884 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 07:12:52.131895 | orchestrator | 2026-04-16 07:12:52.131906 | orchestrator | 2026-04-16 07:12:52.131917 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:12:52.131928 | orchestrator | Thursday 16 April 2026 07:12:52 +0000 (0:00:02.618) 0:04:54.265 ******** 2026-04-16 07:12:52.131939 | orchestrator | =============================================================================== 2026-04-16 07:12:52.131949 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 33.67s 2026-04-16 
07:12:52.131960 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 31.06s 2026-04-16 07:12:52.131971 | orchestrator | Manage labels ----------------------------------------------------------- 9.12s 2026-04-16 07:12:52.131982 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.01s 2026-04-16 07:12:52.131993 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.58s 2026-04-16 07:12:52.132004 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.08s 2026-04-16 07:12:52.132014 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.48s 2026-04-16 07:12:52.132026 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.40s 2026-04-16 07:12:52.132036 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.18s 2026-04-16 07:12:52.132057 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 3.08s 2026-04-16 07:12:52.132068 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 3.00s 2026-04-16 07:12:52.132079 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.99s 2026-04-16 07:12:52.132096 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.99s 2026-04-16 07:12:52.477376 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.97s 2026-04-16 07:12:52.477485 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.77s 2026-04-16 07:12:52.477501 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.69s 2026-04-16 07:12:52.477513 | orchestrator | k3s_custom_registries : Remove 
/etc/rancher/k3s/registries.yaml when no registries configured --- 2.68s 2026-04-16 07:12:52.477526 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.62s 2026-04-16 07:12:52.477594 | orchestrator | Manage taints ----------------------------------------------------------- 2.62s 2026-04-16 07:12:52.477607 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.54s 2026-04-16 07:12:52.666217 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-16 07:12:52.666289 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-04-16 07:12:52.673762 | orchestrator | + set -e 2026-04-16 07:12:52.673824 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 07:12:52.673831 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 07:12:52.673837 | orchestrator | ++ INTERACTIVE=false 2026-04-16 07:12:52.673842 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 07:12:52.673847 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 07:12:52.673852 | orchestrator | + osism apply openstackclient 2026-04-16 07:13:04.040177 | orchestrator | 2026-04-16 07:13:04 | INFO  | Prepare task for execution of openstackclient. 2026-04-16 07:13:04.114815 | orchestrator | 2026-04-16 07:13:04 | INFO  | Task c37a8edb-9c1a-44a9-b8a8-47a50c95b6ed (openstackclient) was prepared for execution. 2026-04-16 07:13:04.114886 | orchestrator | 2026-04-16 07:13:04 | INFO  | It takes a moment until task c37a8edb-9c1a-44a9-b8a8-47a50c95b6ed (openstackclient) has been started and output is visible here. 
2026-04-16 07:13:36.818983 | orchestrator | 2026-04-16 07:13:36.819083 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-04-16 07:13:36.819103 | orchestrator | 2026-04-16 07:13:36.819118 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-04-16 07:13:36.819132 | orchestrator | Thursday 16 April 2026 07:13:09 +0000 (0:00:01.833) 0:00:01.833 ******** 2026-04-16 07:13:36.819146 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-04-16 07:13:36.819161 | orchestrator | 2026-04-16 07:13:36.819175 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-04-16 07:13:36.819189 | orchestrator | Thursday 16 April 2026 07:13:10 +0000 (0:00:01.449) 0:00:03.283 ******** 2026-04-16 07:13:36.819204 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-04-16 07:13:36.819220 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-04-16 07:13:36.819235 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-04-16 07:13:36.819245 | orchestrator | 2026-04-16 07:13:36.819253 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-04-16 07:13:36.819262 | orchestrator | Thursday 16 April 2026 07:13:13 +0000 (0:00:02.411) 0:00:05.694 ******** 2026-04-16 07:13:36.819270 | orchestrator | changed: [testbed-manager] 2026-04-16 07:13:36.819279 | orchestrator | 2026-04-16 07:13:36.819287 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-04-16 07:13:36.819295 | orchestrator | Thursday 16 April 2026 07:13:15 +0000 (0:00:02.081) 0:00:07.776 ******** 2026-04-16 07:13:36.819303 | orchestrator | ok: [testbed-manager] 2026-04-16 07:13:36.819336 | 
orchestrator | 2026-04-16 07:13:36.819345 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-04-16 07:13:36.819353 | orchestrator | Thursday 16 April 2026 07:13:17 +0000 (0:00:02.032) 0:00:09.808 ******** 2026-04-16 07:13:36.819361 | orchestrator | ok: [testbed-manager] 2026-04-16 07:13:36.819369 | orchestrator | 2026-04-16 07:13:36.819377 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-04-16 07:13:36.819385 | orchestrator | Thursday 16 April 2026 07:13:19 +0000 (0:00:01.938) 0:00:11.746 ******** 2026-04-16 07:13:36.819393 | orchestrator | ok: [testbed-manager] 2026-04-16 07:13:36.819401 | orchestrator | 2026-04-16 07:13:36.819409 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-04-16 07:13:36.819417 | orchestrator | Thursday 16 April 2026 07:13:20 +0000 (0:00:01.548) 0:00:13.295 ******** 2026-04-16 07:13:36.819425 | orchestrator | changed: [testbed-manager] 2026-04-16 07:13:36.819433 | orchestrator | 2026-04-16 07:13:36.819441 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-04-16 07:13:36.819449 | orchestrator | Thursday 16 April 2026 07:13:31 +0000 (0:00:10.711) 0:00:24.007 ******** 2026-04-16 07:13:36.819457 | orchestrator | changed: [testbed-manager] 2026-04-16 07:13:36.819464 | orchestrator | 2026-04-16 07:13:36.819472 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-04-16 07:13:36.819480 | orchestrator | Thursday 16 April 2026 07:13:33 +0000 (0:00:01.639) 0:00:25.646 ******** 2026-04-16 07:13:36.819488 | orchestrator | changed: [testbed-manager] 2026-04-16 07:13:36.819496 | orchestrator | 2026-04-16 07:13:36.819504 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-04-16 07:13:36.819512 | orchestrator | Thursday 16 April 2026 
07:13:34 +0000 (0:00:01.545) 0:00:27.191 ******** 2026-04-16 07:13:36.819519 | orchestrator | ok: [testbed-manager] 2026-04-16 07:13:36.819527 | orchestrator | 2026-04-16 07:13:36.819535 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:13:36.819574 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 07:13:36.819585 | orchestrator | 2026-04-16 07:13:36.819594 | orchestrator | 2026-04-16 07:13:36.819603 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:13:36.819612 | orchestrator | Thursday 16 April 2026 07:13:36 +0000 (0:00:01.873) 0:00:29.064 ******** 2026-04-16 07:13:36.819622 | orchestrator | =============================================================================== 2026-04-16 07:13:36.819631 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.71s 2026-04-16 07:13:36.819640 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.41s 2026-04-16 07:13:36.819649 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.08s 2026-04-16 07:13:36.819659 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.03s 2026-04-16 07:13:36.819668 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.94s 2026-04-16 07:13:36.819677 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.87s 2026-04-16 07:13:36.819686 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.64s 2026-04-16 07:13:36.819696 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.55s 2026-04-16 07:13:36.819705 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.55s 2026-04-16 
07:13:36.819714 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.45s 2026-04-16 07:13:37.002744 | orchestrator | + osism apply -a upgrade common 2026-04-16 07:13:38.270318 | orchestrator | 2026-04-16 07:13:38 | INFO  | Prepare task for execution of common. 2026-04-16 07:13:38.332780 | orchestrator | 2026-04-16 07:13:38 | INFO  | Task 3d4b781c-2929-4dae-9fb4-92673df1b971 (common) was prepared for execution. 2026-04-16 07:13:38.332906 | orchestrator | 2026-04-16 07:13:38 | INFO  | It takes a moment until task 3d4b781c-2929-4dae-9fb4-92673df1b971 (common) has been started and output is visible here. 2026-04-16 07:13:51.997635 | orchestrator | 2026-04-16 07:13:51.997721 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-04-16 07:13:51.997732 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-16 07:13:51.997740 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-16 07:13:51.997753 | orchestrator | 2026-04-16 07:13:51.997760 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-16 07:13:51.997766 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-16 07:13:51.997772 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-16 07:13:51.997785 | orchestrator | Thursday 16 April 2026 07:13:43 +0000 (0:00:01.920) 0:00:01.920 ******** 2026-04-16 07:13:51.997792 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 07:13:51.997800 | orchestrator | 2026-04-16 07:13:51.997806 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-04-16 07:13:51.997812 | orchestrator | Thursday 16 April 2026 07:13:45 +0000 
(0:00:02.048) 0:00:03.968 ******** 2026-04-16 07:13:51.997818 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997824 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997836 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997846 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997856 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997867 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997879 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997890 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997900 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997911 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.997922 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-04-16 07:13:51.997953 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997961 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.997967 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997973 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.997979 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-04-16 07:13:51.997986 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 
'fluentd']) 2026-04-16 07:13:51.997995 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.998001 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.998007 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.998057 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-04-16 07:13:51.998065 | orchestrator | 2026-04-16 07:13:51.998091 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-04-16 07:13:51.998097 | orchestrator | Thursday 16 April 2026 07:13:48 +0000 (0:00:02.770) 0:00:06.739 ******** 2026-04-16 07:13:51.998104 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 07:13:51.998113 | orchestrator | 2026-04-16 07:13:51.998121 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-04-16 07:13:51.998128 | orchestrator | Thursday 16 April 2026 07:13:49 +0000 (0:00:01.677) 0:00:08.417 ******** 2026-04-16 07:13:51.998138 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998172 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998181 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998189 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998197 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:51.998208 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998222 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998229 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:13:51.998241 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474216 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474321 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474350 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474376 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474408 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474420 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474432 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474461 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474474 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474485 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474737 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474758 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:13:54.474780 | orchestrator | 2026-04-16 07:13:54.474795 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-16 07:13:54.474807 | orchestrator | Thursday 16 April 2026 07:13:53 +0000 (0:00:03.770) 0:00:12.187 ******** 2026-04-16 07:13:54.474824 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 
07:13:54.474839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:54.474853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:54.474889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:55.697538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697643 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:13:55.697666 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697686 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:13:55.697706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:55.697725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697788 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:13:55.697807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:55.697827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697859 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:13:55.697880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:55.697909 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:55.697951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987272 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:13:56.987363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987376 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:13:56.987386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987416 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:13:56.987425 | orchestrator | 2026-04-16 07:13:56.987434 | orchestrator 
| TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-16 07:13:56.987443 | orchestrator | Thursday 16 April 2026 07:13:55 +0000 (0:00:02.241) 0:00:14.429 ******** 2026-04-16 07:13:56.987454 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:56.987465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:56.987520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-16 07:13:56.987530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987610 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987620 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:13:56.987629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:56.987644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:56.987657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987674 | 
orchestrator | skipping: [testbed-manager] 2026-04-16 07:13:56.987682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:13:56.987699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:13:56.987713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972270 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:14:03.972380 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:14:03.972397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:03.972428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972441 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972453 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:14:03.972464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972474 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:14:03.972484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:03.972495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:03.972542 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:14:03.972623 | orchestrator | 2026-04-16 07:14:03.972641 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-16 07:14:03.972657 | orchestrator | Thursday 16 April 2026 07:13:58 +0000 (0:00:02.195) 0:00:16.624 ******** 2026-04-16 07:14:03.972674 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:14:03.972689 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:14:03.972727 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:14:03.972746 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:14:03.972763 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:14:03.972780 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:14:03.972796 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:14:03.972814 | orchestrator | 2026-04-16 07:14:03.972831 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-16 07:14:03.972849 | orchestrator | Thursday 16 April 2026 07:13:58 +0000 (0:00:00.792) 0:00:17.417 ******** 2026-04-16 07:14:03.972865 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:14:03.972883 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:14:03.972900 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
07:14:03.972917 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:14:03.972930 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:14:03.972941 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:14:03.972951 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:14:03.972962 | orchestrator | 2026-04-16 07:14:03.972973 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-16 07:14:03.972984 | orchestrator | Thursday 16 April 2026 07:13:59 +0000 (0:00:00.695) 0:00:18.112 ******** 2026-04-16 07:14:03.972995 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:14:03.973006 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:14:03.973026 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:14:03.973037 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:14:03.973048 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:14:03.973059 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:14:03.973070 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:14:03.973081 | orchestrator | 2026-04-16 07:14:03.973093 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-16 07:14:03.973105 | orchestrator | Thursday 16 April 2026 07:14:00 +0000 (0:00:00.778) 0:00:18.891 ******** 2026-04-16 07:14:03.973117 | orchestrator | changed: [testbed-manager] 2026-04-16 07:14:03.973128 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:14:03.973138 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:14:03.973148 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:14:03.973157 | orchestrator | changed: [testbed-node-3] 2026-04-16 07:14:03.973166 | orchestrator | changed: [testbed-node-4] 2026-04-16 07:14:03.973176 | orchestrator | changed: [testbed-node-5] 2026-04-16 07:14:03.973185 | orchestrator | 2026-04-16 07:14:03.973194 | orchestrator | TASK [common : Copying over config.json files for services] 
******************** 2026-04-16 07:14:03.973204 | orchestrator | Thursday 16 April 2026 07:14:02 +0000 (0:00:01.811) 0:00:20.703 ******** 2026-04-16 07:14:03.973214 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:03.973239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:03.973250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:03.973260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:03.973281 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:06.214524 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:06.214540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:06.214612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214647 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214696 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214707 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:06.214782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:17.742816 | orchestrator | 2026-04-16 07:14:17.742940 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-04-16 07:14:17.742957 | orchestrator | Thursday 16 April 2026 07:14:06 +0000 (0:00:04.071) 0:00:24.774 ******** 2026-04-16 07:14:17.742969 | orchestrator | [WARNING]: Skipped 2026-04-16 07:14:17.742981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-04-16 07:14:17.742994 | orchestrator | to this access issue: 2026-04-16 07:14:17.743022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-04-16 07:14:17.743033 | orchestrator | directory 2026-04-16 07:14:17.743044 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 07:14:17.743057 | orchestrator | 2026-04-16 07:14:17.743068 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-04-16 07:14:17.743104 | orchestrator | Thursday 16 April 2026 07:14:07 +0000 (0:00:01.183) 0:00:25.958 ******** 2026-04-16 07:14:17.743116 | orchestrator | [WARNING]: Skipped 2026-04-16 07:14:17.743127 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-04-16 07:14:17.743137 | orchestrator | to this access issue: 2026-04-16 07:14:17.743149 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-04-16 07:14:17.743159 | orchestrator | directory 2026-04-16 07:14:17.743170 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 07:14:17.743181 | orchestrator | 2026-04-16 07:14:17.743192 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-04-16 07:14:17.743203 | orchestrator | Thursday 16 April 2026 07:14:08 +0000 (0:00:00.977) 0:00:26.936 ******** 
2026-04-16 07:14:17.743214 | orchestrator | [WARNING]: Skipped 2026-04-16 07:14:17.743224 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-04-16 07:14:17.743235 | orchestrator | to this access issue: 2026-04-16 07:14:17.743246 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-04-16 07:14:17.743257 | orchestrator | directory 2026-04-16 07:14:17.743268 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 07:14:17.743278 | orchestrator | 2026-04-16 07:14:17.743289 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-04-16 07:14:17.743300 | orchestrator | Thursday 16 April 2026 07:14:09 +0000 (0:00:00.856) 0:00:27.793 ******** 2026-04-16 07:14:17.743311 | orchestrator | [WARNING]: Skipped 2026-04-16 07:14:17.743322 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-04-16 07:14:17.743333 | orchestrator | to this access issue: 2026-04-16 07:14:17.743345 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-04-16 07:14:17.743357 | orchestrator | directory 2026-04-16 07:14:17.743370 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 07:14:17.743383 | orchestrator | 2026-04-16 07:14:17.743395 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-04-16 07:14:17.743407 | orchestrator | Thursday 16 April 2026 07:14:10 +0000 (0:00:00.822) 0:00:28.615 ******** 2026-04-16 07:14:17.743418 | orchestrator | changed: [testbed-manager] 2026-04-16 07:14:17.743429 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:14:17.743440 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:14:17.743451 | orchestrator | changed: [testbed-node-3] 2026-04-16 07:14:17.743461 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:14:17.743472 | orchestrator | changed: 
[testbed-node-5] 2026-04-16 07:14:17.743483 | orchestrator | changed: [testbed-node-4] 2026-04-16 07:14:17.743493 | orchestrator | 2026-04-16 07:14:17.743504 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-04-16 07:14:17.743515 | orchestrator | Thursday 16 April 2026 07:14:12 +0000 (0:00:02.774) 0:00:31.390 ******** 2026-04-16 07:14:17.743526 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743538 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743575 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743587 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743598 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743609 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743620 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-04-16 07:14:17.743631 | orchestrator | 2026-04-16 07:14:17.743641 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-04-16 07:14:17.743660 | orchestrator | Thursday 16 April 2026 07:14:14 +0000 (0:00:01.981) 0:00:33.371 ******** 2026-04-16 07:14:17.743671 | orchestrator | ok: [testbed-manager] 2026-04-16 07:14:17.743682 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:14:17.743693 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:14:17.743704 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:14:17.743715 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:14:17.743725 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 07:14:17.743736 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:14:17.743747 | orchestrator | 2026-04-16 07:14:17.743758 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-16 07:14:17.743769 | orchestrator | Thursday 16 April 2026 07:14:16 +0000 (0:00:02.000) 0:00:35.371 ******** 2026-04-16 07:14:17.743803 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:17.743826 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:17.743838 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:17.743849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:17.743861 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:17.743872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:17.743892 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:17.743912 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:22.680019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:22.680130 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:22.680167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:22.680182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:22.680197 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:22.680230 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:22.680244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680293 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680305 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680316 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:22.680328 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680340 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:22.680358 | orchestrator |
2026-04-16 07:14:22.680372 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-16 07:14:22.680384 | orchestrator | Thursday 16 April 2026 07:14:19 +0000 (0:00:02.286) 0:00:37.658 ********
2026-04-16 07:14:22.680395 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680407 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680418 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680428 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680439 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680450 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680460 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:14:22.680471 | orchestrator |
2026-04-16 07:14:22.680482 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-16 07:14:22.680493 | orchestrator | Thursday 16 April 2026 07:14:21 +0000 (0:00:01.789) 0:00:39.447 ********
2026-04-16 07:14:22.680507 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:22.680519 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:22.680530 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:22.680541 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:22.680619 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:22.680642 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:25.605387 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:14:25.605471 | orchestrator |
2026-04-16 07:14:25.605481 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-04-16 07:14:25.605489 | orchestrator | Thursday 16 April 2026 07:14:23 +0000 (0:00:02.440) 0:00:41.888 ********
2026-04-16 07:14:25.605512 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605523 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'},
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605666 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:25.605713 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:25.605782 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.291774 | orchestrator |
2026-04-16 07:14:28.291788 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-16 07:14:28.291800 | orchestrator | Thursday 16 April 2026 07:14:26 +0000 (0:00:03.264) 0:00:45.153 ********
2026-04-16 07:14:28.291812 | orchestrator | changed: [testbed-manager] => {
2026-04-16 07:14:28.291824 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291835 | orchestrator | }
2026-04-16 07:14:28.291846 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:14:28.291857 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291868 | orchestrator | }
2026-04-16 07:14:28.291879 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:14:28.291889 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291900 | orchestrator | }
2026-04-16 07:14:28.291911 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:14:28.291921 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291932 | orchestrator | }
2026-04-16 07:14:28.291943 | orchestrator | changed: [testbed-node-3] => {
2026-04-16 07:14:28.291953 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291964 | orchestrator | }
2026-04-16 07:14:28.291975 | orchestrator | changed: [testbed-node-4] => {
2026-04-16 07:14:28.291986 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.291996 | orchestrator | }
2026-04-16 07:14:28.292007 | orchestrator | changed: [testbed-node-5] => {
2026-04-16 07:14:28.292018 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:14:28.292028 | orchestrator | }
2026-04-16 07:14:28.292039 | orchestrator |
2026-04-16 07:14:28.292071 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:14:28.292082 | orchestrator | Thursday 16 April 2026 07:14:27 +0000 (0:00:00.867) 0:00:46.020 ********
2026-04-16 07:14:28.292104 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:28.292130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.292142 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.292154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:28.292165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.292177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes':
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:28.292188 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:14:28.292199 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:14:28.292211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:28.292235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.528898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.528992 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:14:30.529006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:30.529017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529030 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:14:30.529038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:30.529046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529087 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:14:30.529122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:30.529131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529146 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:14:30.529153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:14:30.529161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:14:30.529174 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:14:30.529180 | orchestrator |
2026-04-16 07:14:30.529188 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529203 | orchestrator | Thursday 16 April 2026 07:14:29 +0000 (0:00:02.398) 0:00:48.419 ********
2026-04-16 07:14:30.529210 | orchestrator |
2026-04-16 07:14:30.529216 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529224 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.084) 0:00:48.503 ********
2026-04-16 07:14:30.529230 | orchestrator |
2026-04-16 07:14:30.529236 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529242 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.070) 0:00:48.573 ********
2026-04-16 07:14:30.529249 | orchestrator |
2026-04-16 07:14:30.529256 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529262 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.070) 0:00:48.644 ********
2026-04-16 07:14:30.529269 | orchestrator |
2026-04-16 07:14:30.529274 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529281 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.068) 0:00:48.713 ********
2026-04-16 07:14:30.529288 | orchestrator |
2026-04-16 07:14:30.529294 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529304 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.070) 0:00:48.784 ********
2026-04-16 07:14:30.529310 | orchestrator |
2026-04-16 07:14:30.529317 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:14:30.529323 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.078) 0:00:48.862 ********
2026-04-16 07:14:30.529330 | orchestrator |
2026-04-16 07:14:30.529341 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-16 07:14:32.786094 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-16 07:14:32.786211 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-16 07:14:32.786240 | orchestrator | Thursday 16 April 2026 07:14:30 +0000 (0:00:00.107) 0:00:48.970 ********
2026-04-16 07:14:32.786256 | orchestrator | fatal: [testbed-manager]: FAILED!
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_r0nml4gw/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_r0nml4gw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_r0nml4gw/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"}
2026-04-16 07:14:32.786294 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
2026-04-16 07:14:32.786306 | orchestrator | (): 'b7fa57d7-4bab-da77-4199-00000000000f'
2026-04-16 07:14:32.786359 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_dxs6md8f/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_dxs6md8f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_dxs6md8f/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"}
2026-04-16 07:14:32.786375 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_s1au55t2/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_s1au55t2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_s1au55t2/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n
json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-16 07:14:32.786412 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_bxhrz9no/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_bxhrz9no/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in 
recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_bxhrz9no/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-16 07:14:34.285234 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_5z3mdagn/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_5z3mdagn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_5z3mdagn/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-16 07:14:34.285400 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_wpwfzzzx/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_wpwfzzzx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_wpwfzzzx/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"} 2026-04-16 07:14:34.285444 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload_qv6zughc/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload_qv6zughc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload_qv6zughc/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n 
self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.9.20260328&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.9.20260328 not found\")\\n'"}
2026-04-16 07:14:34.285459 | orchestrator |
2026-04-16 07:14:34.285472 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:14:34.285515 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285529 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285541 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285584 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285599 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285610 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285621 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-04-16 07:14:34.285632 | orchestrator |
2026-04-16 07:14:34.285643 | orchestrator |
2026-04-16 07:14:34.285663 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:14:34.653954 | orchestrator | 2026-04-16 07:14:34 | INFO  | Prepare task for execution of common.
2026-04-16 07:14:34.658459 | orchestrator | 2026-04-16 07:14:34 | INFO  | Task 16a987c6-cb9d-4242-b0bb-655293b12bad (common) was prepared for execution.
2026-04-16 07:14:34.658504 | orchestrator | 2026-04-16 07:14:34 | INFO  | It takes a moment until task 16a987c6-cb9d-4242-b0bb-655293b12bad (common) has been started and output is visible here.
2026-04-16 07:14:51.917511 | orchestrator | Thursday 16 April 2026 07:14:34 +0000 (0:00:03.752) 0:00:52.722 ********
2026-04-16 07:14:51.917660 | orchestrator | ===============================================================================
2026-04-16 07:14:51.917679 | orchestrator | common : Copying over config.json files for services -------------------- 4.07s
2026-04-16 07:14:51.917691 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.77s
2026-04-16 07:14:51.917702 | orchestrator | common : Restart fluentd container -------------------------------------- 3.75s
2026-04-16 07:14:51.917713 | orchestrator | service-check-containers : common | Check containers -------------------- 3.26s
2026-04-16 07:14:51.917724 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.77s
2026-04-16 07:14:51.917735 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.77s
2026-04-16 07:14:51.917747 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.44s
2026-04-16 07:14:51.917760 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.40s
2026-04-16 07:14:51.917772 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.29s
2026-04-16 07:14:51.917784 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.24s
2026-04-16 07:14:51.917797 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.20s
2026-04-16 07:14:51.917808 | orchestrator | common : include_tasks -------------------------------------------------- 2.05s
2026-04-16 07:14:51.917821 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.00s
2026-04-16 07:14:51.917833 | orchestrator | common : Copying over cron logrotate config file ------------------------ 1.98s
2026-04-16 07:14:51.917844 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.81s
2026-04-16 07:14:51.917856 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.79s
2026-04-16 07:14:51.917867 | orchestrator | common : include_tasks -------------------------------------------------- 1.68s
2026-04-16 07:14:51.917879 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.18s
2026-04-16 07:14:51.917890 | orchestrator | common : Find custom fluentd filter config files ------------------------ 0.98s
2026-04-16 07:14:51.917903 | orchestrator | service-check-containers : common | Notify handlers to restart containers --- 0.87s
2026-04-16 07:14:51.917916 | orchestrator |
2026-04-16 07:14:51.917930 | orchestrator | PLAY [Apply role common] *******************************************************
2026-04-16 07:14:51.917942 | orchestrator |
2026-04-16 07:14:51.917972 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-16 07:14:51.917984 | orchestrator | Thursday 16 April 2026 07:14:40 +0000 (0:00:02.297) 0:00:02.297 ********
2026-04-16 07:14:51.917995 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:14:51.918009 | orchestrator |
2026-04-16 07:14:51.918084 | orchestrator | TASK [common : Ensuring config directories
exist] ******************************
2026-04-16 07:14:51.918098 | orchestrator | Thursday 16 April 2026 07:14:43 +0000 (0:00:03.390) 0:00:05.688 ********
2026-04-16 07:14:51.918110 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918123 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918134 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918144 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918182 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918195 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918205 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918216 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918226 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918237 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918249 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918260 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-04-16 07:14:51.918272 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918283 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918296 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918307 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918319 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918327 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-04-16 07:14:51.918334 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918342 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918370 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-04-16 07:14:51.918379 | orchestrator |
2026-04-16 07:14:51.918387 | orchestrator | TASK [common : include_tasks] **************************************************
2026-04-16 07:14:51.918394 | orchestrator | Thursday 16 April 2026 07:14:47 +0000 (0:00:03.892) 0:00:09.580 ********
2026-04-16 07:14:51.918402 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:14:51.918410 | orchestrator |
2026-04-16 07:14:51.918417 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-04-16 07:14:51.918424 | orchestrator | Thursday 16 April 2026 07:14:50 +0000 (0:00:02.778) 0:00:12.359 ********
2026-04-16 07:14:51.918433 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions':
{}}}) 2026-04-16 07:14:51.918444 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:51.918459 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:51.918475 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:51.918483 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:51.918490 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:51.918503 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:55.343656 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343738 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:14:55.343759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343789 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343798 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343830 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343835 | orchestrator | ok: [testbed-node-0] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343839 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343843 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343852 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-04-16 07:14:55.343856 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343860 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343868 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:14:55.343872 | orchestrator | 2026-04-16 07:14:55.343877 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-04-16 07:14:55.343882 | orchestrator | Thursday 16 April 2026 07:14:54 +0000 (0:00:04.855) 0:00:17.215 ******** 2026-04-16 07:14:55.343887 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:55.343895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:56.934652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.934836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.934870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.934894 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:14:56.934915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:56.934935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.934954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.934974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.935021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:56.935055 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:14:56.935074 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:14:56.935094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.935123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:56.935143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.935162 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:14:56.935180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.935201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:56.935220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:56.935239 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:14:56.935270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-16 07:14:59.471746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:59.471855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.471872 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:14:59.471884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.471894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.471902 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:14:59.471911 | orchestrator | 2026-04-16 07:14:59.471920 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-04-16 07:14:59.471929 | orchestrator | Thursday 16 April 2026 07:14:58 +0000 (0:00:03.657) 0:00:20.873 ******** 2026-04-16 07:14:59.471937 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:59.471947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:59.471973 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.471996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.472009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:59.472018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.472027 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.472035 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:14:59.472044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:14:59.472052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-16 07:14:59.472066 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:14:59.472074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:14:59.472089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:15:12.344737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.344855 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:15:12.344892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.344906 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:15:12.344919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.344934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:15:12.344946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.344981 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:15:12.344993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.345005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.345016 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:15:12.345052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-04-16 07:15:12.345082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.345102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:12.345122 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:15:12.345142 | orchestrator | 2026-04-16 07:15:12.345162 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-04-16 07:15:12.345185 | orchestrator | Thursday 16 April 2026 07:15:01 +0000 (0:00:03.018) 0:00:23.891 ******** 2026-04-16 07:15:12.345205 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:15:12.345226 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:15:12.345245 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:15:12.345267 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:15:12.345287 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:15:12.345307 | orchestrator | skipping: [testbed-node-4] 
2026-04-16 07:15:12.345328 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:15:12.345347 | orchestrator | 2026-04-16 07:15:12.345369 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-04-16 07:15:12.345390 | orchestrator | Thursday 16 April 2026 07:15:03 +0000 (0:00:02.273) 0:00:26.165 ******** 2026-04-16 07:15:12.345426 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:15:12.345444 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:15:12.345464 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:15:12.345482 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:15:12.345501 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:15:12.345521 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:15:12.345542 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:15:12.345590 | orchestrator | 2026-04-16 07:15:12.345610 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-04-16 07:15:12.345628 | orchestrator | Thursday 16 April 2026 07:15:05 +0000 (0:00:01.891) 0:00:28.057 ******** 2026-04-16 07:15:12.345647 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:15:12.345666 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:15:12.345684 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:15:12.345703 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:15:12.345715 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:15:12.345725 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:15:12.345736 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:15:12.345746 | orchestrator | 2026-04-16 07:15:12.345757 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-04-16 07:15:12.345768 | orchestrator | Thursday 16 April 2026 07:15:08 +0000 (0:00:02.392) 0:00:30.449 ******** 2026-04-16 07:15:12.345778 | orchestrator | ok: [testbed-manager] 
2026-04-16 07:15:12.345790 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:15:12.345801 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:15:12.345812 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:15:12.345823 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:15:12.345834 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:15:12.345844 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:15:12.345855 | orchestrator | 2026-04-16 07:15:12.345866 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-04-16 07:15:12.345877 | orchestrator | Thursday 16 April 2026 07:15:11 +0000 (0:00:02.999) 0:00:33.448 ******** 2026-04-16 07:15:12.345889 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:12.345916 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.115822 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.116148 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.116176 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.116181 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116186 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.116190 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:14.116196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116215 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116220 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116238 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116244 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116251 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116257 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:14.116273 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:34.727618 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:34.727730 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:34.727740 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:34.727746 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:34.727753 | orchestrator |
2026-04-16 07:15:34.727763 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-04-16 07:15:34.727775 | orchestrator | Thursday 16 April 2026 07:15:15 +0000 (0:00:04.725) 0:00:38.174 ********
2026-04-16 07:15:34.727785 | orchestrator | [WARNING]: Skipped
2026-04-16 07:15:34.727795 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-04-16 07:15:34.727806 | orchestrator | to this access issue:
2026-04-16 07:15:34.727816 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-04-16 07:15:34.727825 | orchestrator | directory
2026-04-16 07:15:34.727834 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 07:15:34.727844 | orchestrator |
2026-04-16 07:15:34.727853 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-04-16 07:15:34.727862 | orchestrator | Thursday 16 April 2026 07:15:18 +0000 (0:00:02.400) 0:00:40.574 ********
2026-04-16 07:15:34.727871 | orchestrator | [WARNING]: Skipped
2026-04-16 07:15:34.727881 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-04-16 07:15:34.727890 | orchestrator | to this access issue:
2026-04-16 07:15:34.727900 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-04-16 07:15:34.727909 | orchestrator | directory
2026-04-16 07:15:34.727919 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 07:15:34.727929 | orchestrator |
2026-04-16 07:15:34.727939 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-04-16 07:15:34.727949 | orchestrator | Thursday 16 April 2026 07:15:20 +0000 (0:00:01.837) 0:00:42.412 ********
2026-04-16 07:15:34.727958 | orchestrator | [WARNING]: Skipped
2026-04-16 07:15:34.727968 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-04-16 07:15:34.727976 | orchestrator | to this access issue:
2026-04-16 07:15:34.727987 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-04-16 07:15:34.727997 | orchestrator | directory
2026-04-16 07:15:34.728006 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 07:15:34.728016 | orchestrator |
2026-04-16 07:15:34.728026 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-04-16 07:15:34.728036 | orchestrator | Thursday 16 April 2026 07:15:22 +0000 (0:00:01.869) 0:00:44.282 ********
2026-04-16 07:15:34.728056 | orchestrator | [WARNING]: Skipped
2026-04-16 07:15:34.728062 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-04-16 07:15:34.728068 | orchestrator | to this access issue:
2026-04-16 07:15:34.728074 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-04-16 07:15:34.728080 | orchestrator | directory
2026-04-16 07:15:34.728085 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 07:15:34.728091 | orchestrator |
2026-04-16 07:15:34.728108 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-04-16 07:15:34.728114 | orchestrator | Thursday 16 April 2026 07:15:23 +0000 (0:00:01.819) 0:00:46.101 ********
2026-04-16 07:15:34.728120 | orchestrator | ok: [testbed-manager]
2026-04-16 07:15:34.728126 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:15:34.728133 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:15:34.728140 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:15:34.728146 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:15:34.728152 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:15:34.728159 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:15:34.728166 | orchestrator |
2026-04-16 07:15:34.728185 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-04-16 07:15:34.728192 | orchestrator | Thursday 16 April 2026 07:15:27 +0000 (0:00:03.993) 0:00:50.095 ********
2026-04-16 07:15:34.728199 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728207 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728214 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728221 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728227 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728233 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728239 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-04-16 07:15:34.728245 | orchestrator |
2026-04-16 07:15:34.728251 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists]
*************************** 2026-04-16 07:15:34.728256 | orchestrator | Thursday 16 April 2026 07:15:30 +0000 (0:00:03.120) 0:00:53.216 ******** 2026-04-16 07:15:34.728262 | orchestrator | ok: [testbed-manager] 2026-04-16 07:15:34.728268 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:15:34.728274 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:15:34.728280 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:15:34.728286 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:15:34.728291 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:15:34.728297 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:15:34.728303 | orchestrator | 2026-04-16 07:15:34.728308 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-04-16 07:15:34.728314 | orchestrator | Thursday 16 April 2026 07:15:34 +0000 (0:00:03.053) 0:00:56.269 ******** 2026-04-16 07:15:34.728322 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:34.728332 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:34.728343 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:34.728353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:34.728371 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:35.790855 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:35.790960 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.790978 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:35.790992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:35.791030 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:35.791043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:35.791071 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.791103 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:35.791116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:15:35.791128 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.791140 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.791159 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.791171 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:15:35.791183 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-04-16 07:15:35.791200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:35.791220 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:45.042179 | orchestrator |
2026-04-16 07:15:45.042296 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-04-16 07:15:45.042311 | orchestrator | Thursday 16 April 2026 07:15:36 +0000 (0:00:02.837) 0:00:59.106 ********
2026-04-16 07:15:45.042322 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042333 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042343 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042353 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042363 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042372 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042382 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-04-16 07:15:45.042392 | orchestrator |
2026-04-16 07:15:45.042401 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-04-16 07:15:45.042433 | orchestrator | Thursday 16 April 2026 07:15:39 +0000 (0:00:02.767) 0:01:01.874 ********
2026-04-16 07:15:45.042444 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042454 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042463 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042473 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042483 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042493 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042502 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-04-16 07:15:45.042512 | orchestrator |
2026-04-16 07:15:45.042522 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-04-16 07:15:45.042531 | orchestrator | Thursday 16 April 2026 07:15:43 +0000 (0:00:03.408) 0:01:05.282 ********
2026-04-16 07:15:45.042545 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042709 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:45.042728 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:45.042837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:45.042873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:45.042887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:45.042910 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637253 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637265 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637377 | orchestrator |
2026-04-16 07:15:49.637397 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-04-16 07:15:49.637416 | orchestrator | Thursday 16 April 2026 07:15:47 +0000 (0:00:04.188)
0:01:09.471 ********
2026-04-16 07:15:49.637435 | orchestrator | changed: [testbed-manager] => {
2026-04-16 07:15:49.637453 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637469 | orchestrator | }
2026-04-16 07:15:49.637488 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:15:49.637506 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637524 | orchestrator | }
2026-04-16 07:15:49.637541 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:15:49.637556 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637624 | orchestrator | }
2026-04-16 07:15:49.637638 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:15:49.637648 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637657 | orchestrator | }
2026-04-16 07:15:49.637668 | orchestrator | changed: [testbed-node-3] => {
2026-04-16 07:15:49.637679 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637690 | orchestrator | }
2026-04-16 07:15:49.637701 | orchestrator | changed: [testbed-node-4] => {
2026-04-16 07:15:49.637712 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637722 | orchestrator | }
2026-04-16 07:15:49.637733 | orchestrator | changed: [testbed-node-5] => {
2026-04-16 07:15:49.637745 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:15:49.637756 | orchestrator | }
2026-04-16 07:15:49.637767 | orchestrator |
2026-04-16 07:15:49.637779 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:15:49.637790 | orchestrator | Thursday 16 April 2026 07:15:49 +0000 (0:00:01.942) 0:01:11.413 ********
2026-04-16 07:15:49.637803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:49.637816 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:49.637859 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:15:49.637871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:49.637892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.389422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390457 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:15:50.390506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:50.390520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390540 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:15:50.390549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:50.390625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390646 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:15:50.390677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:50.390687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390696 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:50.390715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:15:50.390744 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:15:50.390753 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:15:50.390762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-04-16 07:15:50.390777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:17:30.609049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328',
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:17:30.609162 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:17:30.609178 | orchestrator |
2026-04-16 07:17:30.609188 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609198 | orchestrator | Thursday 16 April 2026 07:15:52 +0000 (0:00:03.165) 0:01:14.579 ********
2026-04-16 07:17:30.609208 | orchestrator |
2026-04-16 07:17:30.609216 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609224 | orchestrator | Thursday 16 April 2026 07:15:52 +0000 (0:00:00.429) 0:01:15.008 ********
2026-04-16 07:17:30.609232 | orchestrator |
2026-04-16 07:17:30.609240 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609248 | orchestrator | Thursday 16 April 2026 07:15:53 +0000 (0:00:00.441) 0:01:15.450 ********
2026-04-16 07:17:30.609257 | orchestrator |
2026-04-16 07:17:30.609265 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609272 | orchestrator | Thursday 16 April 2026 07:15:53 +0000 (0:00:00.414) 0:01:15.865 ********
2026-04-16 07:17:30.609281 | orchestrator |
2026-04-16 07:17:30.609289 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609298 | orchestrator | Thursday 16 April 2026 07:15:54 +0000 (0:00:00.471) 0:01:16.336 ********
2026-04-16 07:17:30.609330 | orchestrator |
2026-04-16 07:17:30.609338 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609343 | orchestrator | Thursday 16 April 2026 07:15:54 +0000 (0:00:00.453) 0:01:16.789 ********
2026-04-16 07:17:30.609347 | orchestrator |
2026-04-16 07:17:30.609352 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-04-16 07:17:30.609357 | orchestrator | Thursday 16 April 2026 07:15:54 +0000 (0:00:00.438) 0:01:17.227 ********
2026-04-16 07:17:30.609362 | orchestrator |
2026-04-16 07:17:30.609366 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-04-16 07:17:30.609371 | orchestrator | Thursday 16 April 2026 07:15:55 +0000 (0:00:00.833) 0:01:18.061 ********
2026-04-16 07:17:30.609376 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:17:30.609381 | orchestrator | changed: [testbed-manager]
2026-04-16 07:17:30.609385 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:17:30.609390 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:17:30.609395 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:17:30.609399 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:17:30.609404 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:17:30.609408 | orchestrator |
2026-04-16 07:17:30.609413 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-04-16 07:17:30.609417 | orchestrator | Thursday 16 April 2026 07:16:37 +0000 (0:00:42.061) 0:02:00.123 ********
2026-04-16 07:17:30.609422 | orchestrator | changed: [testbed-manager]
2026-04-16 07:17:30.609427 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:17:30.609432 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:17:30.609437 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:17:30.609441 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:17:30.609446 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:17:30.609450 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:17:30.609455 | orchestrator |
2026-04-16 07:17:30.609459 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-04-16 07:17:30.609464 | orchestrator | Thursday 16 April 2026 07:17:14 +0000 (0:00:37.023) 0:02:37.146 ********
2026-04-16 07:17:30.609481 | orchestrator | ok: [testbed-manager]
2026-04-16 07:17:30.609487 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:30.609492 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:30.609496 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:30.609501 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:17:30.609505 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:17:30.609510 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:17:30.609515 | orchestrator |
2026-04-16 07:17:30.609519 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-04-16 07:17:30.609524 | orchestrator | Thursday 16 April 2026 07:17:17 +0000 (0:00:02.899) 0:02:40.046 ********
2026-04-16 07:17:30.609528 | orchestrator | changed: [testbed-manager]
2026-04-16 07:17:30.609533 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:17:30.609537 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:17:30.609542 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:17:30.609546 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:17:30.609551 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:17:30.609555 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:17:30.609560 | orchestrator |
2026-04-16 07:17:30.609565 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:17:30.609570 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609576 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609599 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609605 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609628 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609634 | orchestrator | testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609640 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:17:30.609645 | orchestrator |
2026-04-16 07:17:30.609650 | orchestrator |
2026-04-16 07:17:30.609656 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:17:30.609661 | orchestrator | Thursday 16 April 2026 07:17:30 +0000 (0:00:12.426) 0:02:52.472 ********
2026-04-16 07:17:30.609666 | orchestrator | ===============================================================================
2026-04-16 07:17:30.609671 | orchestrator | common : Restart fluentd container ------------------------------------- 42.06s
2026-04-16 07:17:30.609677 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 37.02s
2026-04-16 07:17:30.609682 | orchestrator | common : Restart cron container ---------------------------------------- 12.43s
2026-04-16 07:17:30.609688 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.86s
2026-04-16 07:17:30.609693 | orchestrator | common : Copying over config.json files for services -------------------- 4.73s
2026-04-16 07:17:30.609698 | orchestrator | service-check-containers : common | Check containers -------------------- 4.19s
2026-04-16 07:17:30.609703 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.99s
2026-04-16 07:17:30.609708 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.89s
2026-04-16 07:17:30.609713 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.66s
2026-04-16 07:17:30.609720 | orchestrator | common : Flush handlers ------------------------------------------------- 3.48s
2026-04-16 07:17:30.609725 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.41s
2026-04-16 07:17:30.609730 | orchestrator | common : include_tasks -------------------------------------------------- 3.39s
2026-04-16 07:17:30.609736 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.17s
2026-04-16 07:17:30.609741 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.12s
2026-04-16 07:17:30.609746 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.05s
2026-04-16 07:17:30.609752 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.02s
2026-04-16 07:17:30.609757 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.00s
2026-04-16 07:17:30.609762 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.90s
2026-04-16 07:17:30.609768 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.84s
2026-04-16 07:17:30.609773 | orchestrator | common : include_tasks -------------------------------------------------- 2.78s
2026-04-16 07:17:30.800704 | orchestrator | + osism apply -a upgrade loadbalancer
2026-04-16 07:17:32.064320 | orchestrator | 2026-04-16 07:17:32 | INFO  | Prepare task for execution of loadbalancer.
2026-04-16 07:17:32.126567 | orchestrator | 2026-04-16 07:17:32 | INFO  | Task e6ff0a14-4ff3-4c8b-a332-0479943fa52f (loadbalancer) was prepared for execution.
2026-04-16 07:17:32.126764 | orchestrator | 2026-04-16 07:17:32 | INFO  | It takes a moment until task e6ff0a14-4ff3-4c8b-a332-0479943fa52f (loadbalancer) has been started and output is visible here.
2026-04-16 07:17:51.872621 | orchestrator |
2026-04-16 07:17:51.872736 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 07:17:51.872753 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 07:17:51.872793 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 07:17:51.872816 | orchestrator |
2026-04-16 07:17:51.872828 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 07:17:51.872839 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 07:17:51.872849 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 07:17:51.872871 | orchestrator | Thursday 16 April 2026 07:17:36 +0000 (0:00:01.277) 0:00:01.277 ********
2026-04-16 07:17:51.872881 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:51.872893 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:51.872904 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:51.872914 | orchestrator |
2026-04-16 07:17:51.872925 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 07:17:51.872936 | orchestrator | Thursday 16 April 2026 07:17:37 +0000 (0:00:00.777) 0:00:02.054 ********
2026-04-16 07:17:51.872947 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-04-16 07:17:51.872957 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-04-16 07:17:51.872968 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-04-16 07:17:51.872979 | orchestrator |
2026-04-16 07:17:51.872990 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-04-16 07:17:51.873001 | orchestrator |
2026-04-16 07:17:51.873011 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-04-16 07:17:51.873022 | orchestrator | Thursday 16 April 2026 07:17:38 +0000 (0:00:00.918) 0:00:02.973 ********
2026-04-16 07:17:51.873033 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:17:51.873044 | orchestrator |
2026-04-16 07:17:51.873054 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-04-16 07:17:51.873065 | orchestrator | Thursday 16 April 2026 07:17:39 +0000 (0:00:01.095) 0:00:04.069 ********
2026-04-16 07:17:51.873075 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:51.873086 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:51.873097 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:51.873107 | orchestrator |
2026-04-16 07:17:51.873118 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-04-16 07:17:51.873130 | orchestrator | Thursday 16 April 2026 07:17:40 +0000 (0:00:01.417) 0:00:05.487 ********
2026-04-16 07:17:51.873143 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:51.873155 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:51.873167 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:51.873179 | orchestrator |
2026-04-16 07:17:51.873192 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-04-16 07:17:51.873205 | orchestrator | Thursday 16 April 2026 07:17:42 +0000 (0:00:01.136) 0:00:06.623 ********
2026-04-16 07:17:51.873217 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:51.873229 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:51.873241 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:51.873253 | orchestrator |
2026-04-16 07:17:51.873265 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-04-16 07:17:51.873277 | orchestrator | Thursday 16 April 2026 07:17:42 +0000 (0:00:00.746) 0:00:07.370 ********
2026-04-16 07:17:51.873289 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:17:51.873302 | orchestrator |
2026-04-16 07:17:51.873313 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-04-16 07:17:51.873325 | orchestrator | Thursday 16 April 2026 07:17:43 +0000 (0:00:00.855) 0:00:08.226 ********
2026-04-16 07:17:51.873337 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:17:51.873349 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:17:51.873368 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:17:51.873381 | orchestrator |
2026-04-16 07:17:51.873394 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-04-16 07:17:51.873406 | orchestrator | Thursday 16 April 2026 07:17:45 +0000 (0:00:01.637) 0:00:09.864 ********
2026-04-16 07:17:51.873418 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873431 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873443 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873456 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873468 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873480 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-04-16 07:17:51.873492 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-04-16 07:17:51.873504 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-16 07:17:51.873514 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-04-16 07:17:51.873525 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-16 07:17:51.873536 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-16 07:17:51.873736 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-04-16 07:17:51.873758 | orchestrator | 2026-04-16 07:17:51.873769 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-16 07:17:51.873779 | orchestrator | Thursday 16 April 2026 07:17:47 +0000 (0:00:02.489) 0:00:12.353 ******** 2026-04-16 07:17:51.873790 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-16 07:17:51.873802 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-16 07:17:51.873813 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-16 07:17:51.873823 | orchestrator | 2026-04-16 07:17:51.873834 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-16 07:17:51.873845 | orchestrator | Thursday 16 April 2026 07:17:48 +0000 (0:00:00.691) 0:00:13.045 ******** 2026-04-16 07:17:51.873855 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-04-16 07:17:51.873866 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-04-16 07:17:51.873877 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-04-16 07:17:51.873888 | orchestrator | 2026-04-16 07:17:51.873898 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-16 07:17:51.873909 | orchestrator | Thursday 16 April 2026 07:17:49 +0000 (0:00:01.122) 0:00:14.167 ******** 2026-04-16 
07:17:51.873920 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-04-16 07:17:51.873931 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:17:51.873942 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-04-16 07:17:51.873952 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:17:51.873963 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-04-16 07:17:51.873973 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:17:51.873984 | orchestrator | 2026-04-16 07:17:51.873997 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-04-16 07:17:51.874015 | orchestrator | Thursday 16 April 2026 07:17:50 +0000 (0:00:01.169) 0:00:15.337 ******** 2026-04-16 07:17:51.874122 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 07:17:51.874158 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 07:17:51.874216 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 07:17:51.874235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:17:51.874269 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:17:58.469039 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:17:58.469201 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:17:58.469242 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}}) 2026-04-16 07:17:58.469255 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:17:58.469268 | orchestrator | 2026-04-16 07:17:58.469281 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-04-16 07:17:58.469295 | orchestrator | Thursday 16 April 2026 07:17:52 +0000 (0:00:01.649) 0:00:16.986 ******** 2026-04-16 07:17:58.469306 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:17:58.469318 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:17:58.469329 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:17:58.469340 | orchestrator | 2026-04-16 07:17:58.469351 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-04-16 07:17:58.469362 | orchestrator | Thursday 16 April 2026 07:17:53 +0000 (0:00:01.396) 0:00:18.383 ******** 2026-04-16 07:17:58.469373 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-04-16 07:17:58.469384 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-04-16 07:17:58.469395 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-04-16 07:17:58.469406 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-04-16 07:17:58.469416 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-04-16 07:17:58.469427 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-04-16 07:17:58.469438 | orchestrator | 2026-04-16 07:17:58.469448 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 
2026-04-16 07:17:58.469459 | orchestrator | Thursday 16 April 2026 07:17:55 +0000 (0:00:01.548) 0:00:19.932 ******** 2026-04-16 07:17:58.469470 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:17:58.469481 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:17:58.469492 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:17:58.469503 | orchestrator | 2026-04-16 07:17:58.469513 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-04-16 07:17:58.469524 | orchestrator | Thursday 16 April 2026 07:17:56 +0000 (0:00:00.961) 0:00:20.893 ******** 2026-04-16 07:17:58.469535 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:17:58.469546 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:17:58.469556 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:17:58.469569 | orchestrator | 2026-04-16 07:17:58.469582 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-04-16 07:17:58.469625 | orchestrator | Thursday 16 April 2026 07:17:57 +0000 (0:00:01.368) 0:00:22.261 ******** 2026-04-16 07:17:58.469665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 07:17:58.469689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:17:58.469703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:17:58.469717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:17:58.469731 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:17:58.469744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 07:17:58.469758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:17:58.469775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:17:58.469804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 
'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:18:01.312312 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:01.312430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 07:18:01.312456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:01.312475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:01.312494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:18:01.312512 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:01.312529 | orchestrator | 2026-04-16 07:18:01.312548 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-04-16 07:18:01.312567 | orchestrator | Thursday 16 April 2026 07:17:58 +0000 (0:00:01.007) 0:00:23.269 ******** 2026-04-16 07:18:01.312665 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:01.312738 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:01.312757 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:01.312775 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:01.312791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:01.312807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:01.312828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:18:01.312850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:01.312872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:18:07.003524 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:07.003726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20260328', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733', '__omit_place_holder__ee81a48db8ac1c1870185ec9e440abc546059733'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-04-16 07:18:07.003737 | orchestrator | 2026-04-16 07:18:07.003749 | orchestrator | TASK [loadbalancer : Copying over config.json 
files for services] ************** 2026-04-16 07:18:07.003761 | orchestrator | Thursday 16 April 2026 07:18:01 +0000 (0:00:02.812) 0:00:26.082 ******** 2026-04-16 07:18:07.003788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:07.003891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:07.003914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:07.003925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-04-16 07:18:07.003935 | orchestrator | 2026-04-16 07:18:07.003945 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-04-16 07:18:07.003955 | orchestrator | Thursday 16 April 2026 07:18:05 +0000 (0:00:03.535) 0:00:29.617 ******** 2026-04-16 07:18:07.003965 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 07:18:07.003976 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 07:18:07.003986 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-04-16 07:18:07.003996 | orchestrator | 2026-04-16 07:18:07.004005 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-04-16 07:18:07.004022 | orchestrator | Thursday 16 April 2026 07:18:06 +0000 (0:00:01.953) 0:00:31.571 ******** 2026-04-16 07:18:23.783518 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 07:18:23.783664 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 07:18:23.783677 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-04-16 07:18:23.783684 | orchestrator | 2026-04-16 07:18:23.783692 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-04-16 07:18:23.783700 | orchestrator | Thursday 16 April 2026 07:18:10 +0000 (0:00:03.410) 0:00:34.982 ******** 2026-04-16 07:18:23.783707 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:23.783715 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:23.783721 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:23.783727 | orchestrator | 2026-04-16 07:18:23.783733 | 
orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-04-16 07:18:23.783740 | orchestrator | Thursday 16 April 2026 07:18:10 +0000 (0:00:00.576) 0:00:35.558 ******** 2026-04-16 07:18:23.783747 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 07:18:23.783754 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 07:18:23.783761 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-04-16 07:18:23.783767 | orchestrator | 2026-04-16 07:18:23.783774 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-04-16 07:18:23.783801 | orchestrator | Thursday 16 April 2026 07:18:13 +0000 (0:00:02.163) 0:00:37.721 ******** 2026-04-16 07:18:23.783808 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 07:18:23.783815 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 07:18:23.783821 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-04-16 07:18:23.783827 | orchestrator | 2026-04-16 07:18:23.783833 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-04-16 07:18:23.783839 | orchestrator | Thursday 16 April 2026 07:18:15 +0000 (0:00:02.272) 0:00:39.994 ******** 2026-04-16 07:18:23.783846 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:18:23.783853 | orchestrator | 2026-04-16 07:18:23.783859 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] 
********************************* 2026-04-16 07:18:23.783865 | orchestrator | Thursday 16 April 2026 07:18:16 +0000 (0:00:00.992) 0:00:40.987 ******** 2026-04-16 07:18:23.783872 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-04-16 07:18:23.783880 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-04-16 07:18:23.783886 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-04-16 07:18:23.783892 | orchestrator | 2026-04-16 07:18:23.783898 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-04-16 07:18:23.783904 | orchestrator | Thursday 16 April 2026 07:18:18 +0000 (0:00:01.625) 0:00:42.613 ******** 2026-04-16 07:18:23.783911 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-04-16 07:18:23.783917 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-04-16 07:18:23.783924 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-04-16 07:18:23.783930 | orchestrator | 2026-04-16 07:18:23.783956 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-04-16 07:18:23.783963 | orchestrator | Thursday 16 April 2026 07:18:19 +0000 (0:00:01.819) 0:00:44.433 ******** 2026-04-16 07:18:23.783969 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:23.783975 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:23.783981 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:23.783987 | orchestrator | 2026-04-16 07:18:23.783994 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-04-16 07:18:23.784000 | orchestrator | Thursday 16 April 2026 07:18:20 +0000 (0:00:00.296) 0:00:44.729 ******** 2026-04-16 07:18:23.784005 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:23.784011 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:23.784016 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
07:18:23.784022 | orchestrator | 2026-04-16 07:18:23.784028 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-16 07:18:23.784034 | orchestrator | Thursday 16 April 2026 07:18:20 +0000 (0:00:00.715) 0:00:45.445 ******** 2026-04-16 07:18:23.784043 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784081 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:23.784114 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:23.784126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:25.287878 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:25.287987 | orchestrator | 2026-04-16 07:18:25.288005 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-16 07:18:25.288019 | orchestrator | Thursday 16 April 2026 07:18:23 +0000 (0:00:03.110) 0:00:48.555 ******** 2026-04-16 07:18:25.288033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 07:18:25.288047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:25.288059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:25.288071 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:25.288084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 07:18:25.288096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:25.288151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:25.288164 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:25.288176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 07:18:25.288188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:25.288200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:25.288211 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:25.288222 | orchestrator | 2026-04-16 07:18:25.288277 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-16 07:18:25.288290 | orchestrator | Thursday 16 April 2026 07:18:24 +0000 (0:00:00.957) 0:00:49.513 ******** 2026-04-16 07:18:25.288306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 07:18:25.288318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:25.288351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:33.300889 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:33.300988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 07:18:33.301003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:33.301010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:33.301015 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:33.301032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 07:18:33.301036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:33.301055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:33.301059 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:33.301063 | orchestrator | 2026-04-16 07:18:33.301068 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-04-16 07:18:33.301074 | orchestrator | Thursday 16 April 2026 07:18:25 +0000 (0:00:00.906) 0:00:50.419 ******** 2026-04-16 07:18:33.301078 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 07:18:33.301095 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 07:18:33.301099 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-04-16 07:18:33.301103 | orchestrator | 2026-04-16 07:18:33.301107 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-04-16 07:18:33.301111 | orchestrator | Thursday 16 April 2026 07:18:27 +0000 (0:00:01.684) 0:00:52.104 ******** 2026-04-16 07:18:33.301115 | orchestrator | ok: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 07:18:33.301118 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 07:18:33.301122 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-04-16 07:18:33.301126 | orchestrator | 2026-04-16 07:18:33.301130 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-04-16 07:18:33.301133 | orchestrator | Thursday 16 April 2026 07:18:30 +0000 (0:00:02.690) 0:00:54.795 ******** 2026-04-16 07:18:33.301137 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 07:18:33.301141 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 07:18:33.301145 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 07:18:33.301149 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 07:18:33.301153 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:33.301157 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 07:18:33.301160 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:33.301164 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 07:18:33.301168 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:33.301171 | orchestrator | 2026-04-16 07:18:33.301175 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-04-16 07:18:33.301179 | orchestrator | Thursday 16 April 2026 07:18:31 +0000 (0:00:01.237) 0:00:56.032 ******** 2026-04-16 
07:18:33.301186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:33.301194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:33.301198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-04-16 07:18:33.301207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:35.513442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:35.513548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-04-16 07:18:35.513564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:35.513716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:35.513736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-04-16 07:18:35.513749 | orchestrator | 2026-04-16 07:18:35.513762 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to 
restart containers] *** 2026-04-16 07:18:35.513774 | orchestrator | Thursday 16 April 2026 07:18:34 +0000 (0:00:02.940) 0:00:58.972 ******** 2026-04-16 07:18:35.513786 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:18:35.513798 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:18:35.513809 | orchestrator | } 2026-04-16 07:18:35.513820 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 07:18:35.513831 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:18:35.513842 | orchestrator | } 2026-04-16 07:18:35.513852 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 07:18:35.513863 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:18:35.513874 | orchestrator | } 2026-04-16 07:18:35.513885 | orchestrator | 2026-04-16 07:18:35.513896 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 07:18:35.513908 | orchestrator | Thursday 16 April 2026 07:18:34 +0000 (0:00:00.554) 0:00:59.527 ******** 2026-04-16 07:18:35.513938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-04-16 07:18:35.513952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:35.513963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:35.513985 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:35.513998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 07:18:35.514065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:35.514083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:35.514096 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:35.514109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 07:18:35.514133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:18:40.668237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 07:18:40.668382 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:40.668401 | orchestrator | 2026-04-16 07:18:40.668414 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-04-16 07:18:40.668426 | orchestrator | Thursday 16 April 2026 07:18:36 +0000 (0:00:01.082) 0:01:00.609 ******** 2026-04-16 07:18:40.668437 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:18:40.668448 | orchestrator | 2026-04-16 07:18:40.668459 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-04-16 07:18:40.668470 | orchestrator | Thursday 16 April 2026 07:18:37 +0000 (0:00:01.199) 0:01:01.809 ******** 2026-04-16 07:18:40.668500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:18:40.668514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:40.668527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:40.668540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:40.668571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:18:40.668593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:40.668730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:40.668744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:40.668756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:18:40.668779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:41.409149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409309 | orchestrator | 2026-04-16 07:18:41.409332 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-04-16 07:18:41.409353 | orchestrator | Thursday 16 April 2026 07:18:40 +0000 (0:00:03.539) 0:01:05.349 ******** 2026-04-16 07:18:41.409398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-16 07:18:41.409422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:41.409443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409538 | 
orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:41.409560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:18:41.409589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:41.409641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:41.409680 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:41.409700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:18:41.409746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 07:18:50.096427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:50.096560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 07:18:50.096577 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:50.096591 | orchestrator | 2026-04-16 07:18:50.096649 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-04-16 07:18:50.096664 | orchestrator | Thursday 16 April 2026 07:18:41 +0000 (0:00:00.882) 0:01:06.232 ******** 2026-04-16 07:18:50.096677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096704 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:50.096715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096738 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:50.096749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:18:50.096793 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:18:50.096804 | orchestrator | 2026-04-16 07:18:50.096816 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-04-16 07:18:50.096827 | orchestrator | Thursday 16 April 2026 07:18:42 +0000 (0:00:01.115) 0:01:07.348 ******** 2026-04-16 07:18:50.096838 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:18:50.096850 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:18:50.096860 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:18:50.096871 | orchestrator | 2026-04-16 07:18:50.096882 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-04-16 07:18:50.096893 | orchestrator | Thursday 16 April 2026 07:18:44 +0000 (0:00:01.241) 0:01:08.589 ******** 2026-04-16 07:18:50.096904 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:18:50.096915 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:18:50.096925 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:18:50.096936 | orchestrator | 2026-04-16 07:18:50.096949 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-04-16 07:18:50.096962 | orchestrator | Thursday 16 April 2026 07:18:45 +0000 (0:00:01.968) 0:01:10.557 ******** 2026-04-16 07:18:50.096974 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:18:50.096987 | orchestrator | 2026-04-16 07:18:50.096999 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-04-16 07:18:50.097012 | orchestrator | Thursday 16 April 2026 07:18:46 +0000 (0:00:00.877) 0:01:11.435 ******** 2026-04-16 07:18:50.097051 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:18:50.097069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:50.097084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:18:50.097104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:18:50.097151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:50.097175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-16 07:18:51.143223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143264 | orchestrator | 2026-04-16 07:18:51.143275 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-04-16 07:18:51.143283 | orchestrator | Thursday 16 April 2026 07:18:50 +0000 (0:00:03.556) 0:01:14.991 ******** 2026-04-16 07:18:51.143294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:18:51.143320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143343 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:18:51.143353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:18:51.143367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:18:51.143408 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:18:51.143424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:01.310139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 07:19:01.310334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:01.310354 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:01.310368 | orchestrator | 2026-04-16 07:19:01.310381 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-04-16 07:19:01.310394 | orchestrator | Thursday 16 April 2026 07:18:51 +0000 (0:00:01.035) 0:01:16.026 ******** 2026-04-16 07:19:01.310406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:01.310423 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:01.310436 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:01.310447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:01.310459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:01.310470 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:01.310482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:01.310493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:01.310505 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:01.310516 | orchestrator |
2026-04-16 07:19:01.310528 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2026-04-16 07:19:01.310539 | orchestrator | Thursday 16 April 2026 07:18:52 +0000 (0:00:00.866) 0:01:16.893 ********
2026-04-16 07:19:01.310550 | orchestrator | ok: [testbed-node-0] 
2026-04-16 07:19:01.310562 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:01.310573 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:01.310583 | orchestrator |
2026-04-16 07:19:01.310625 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2026-04-16 07:19:01.310638 | orchestrator | Thursday 16 April 2026 07:18:53 +0000 (0:00:01.206) 0:01:18.100 ********
2026-04-16 07:19:01.310648 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:19:01.310659 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:01.310670 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:01.310681 | orchestrator |
2026-04-16 07:19:01.310691 | orchestrator | TASK [include_role : blazar] ***************************************************
2026-04-16 07:19:01.310702 | orchestrator | Thursday 16 April 2026 07:18:55 +0000 (0:00:02.189) 0:01:20.289 ********
2026-04-16 07:19:01.310722 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:01.310733 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:01.310744 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:01.310754 | orchestrator |
2026-04-16 07:19:01.310765 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2026-04-16 07:19:01.310799 | orchestrator | Thursday 16 April 2026 07:18:56 +0000 (0:00:00.550) 0:01:20.839 ********
2026-04-16 07:19:01.310811 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:19:01.310821 | orchestrator |
2026-04-16 07:19:01.310832 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2026-04-16 07:19:01.310843 | orchestrator | Thursday 16 April 2026 07:18:56 +0000 (0:00:00.692) 0:01:21.532 ********
2026-04-16 07:19:01.310867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 07:19:01.310881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 07:19:01.310894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 
2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-04-16 07:19:01.310905 | orchestrator | 2026-04-16 07:19:01.310916 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-04-16 07:19:01.310929 | orchestrator | Thursday 16 April 2026 07:19:00 +0000 (0:00:03.090) 0:01:24.623 ******** 2026-04-16 07:19:01.310940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 07:19:01.310959 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:01.310985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 07:19:09.711633 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:09.711785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-04-16 07:19:09.711805 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:09.711817 | orchestrator | 2026-04-16 07:19:09.711830 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-04-16 07:19:09.711842 | orchestrator | Thursday 16 April 2026 07:19:01 +0000 (0:00:01.421) 0:01:26.045 ******** 2026-04-16 07:19:09.711855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-16 07:19:09.711870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-16 07:19:09.711883 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:09.711895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-16 07:19:09.711906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-04-16 07:19:09.711944 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:09.711957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-16 07:19:09.711968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})
2026-04-16 07:19:09.711980 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:09.711991 | orchestrator |
2026-04-16 07:19:09.712002 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2026-04-16 07:19:09.712030 | orchestrator | Thursday 16 April 2026 07:19:03 +0000 (0:00:01.805) 0:01:27.851 ********
2026-04-16 07:19:09.712041 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:09.712053 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:09.712064 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:09.712075 | orchestrator |
2026-04-16 07:19:09.712086 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2026-04-16 07:19:09.712121 | orchestrator | Thursday 16 April 2026 07:19:04 +0000 (0:00:00.766) 0:01:28.618 ********
2026-04-16 07:19:09.712135 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:09.712147 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:09.712159 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:09.712172 | orchestrator |
2026-04-16 07:19:09.712185 | orchestrator | TASK [include_role : cinder] ***************************************************
2026-04-16 07:19:09.712197 | orchestrator | Thursday 16 April 
2026 07:19:05 +0000 (0:00:01.336) 0:01:29.955 ******** 2026-04-16 07:19:09.712210 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:19:09.712223 | orchestrator | 2026-04-16 07:19:09.712235 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-04-16 07:19:09.712249 | orchestrator | Thursday 16 April 2026 07:19:06 +0000 (0:00:00.785) 0:01:30.740 ******** 2026-04-16 07:19:09.712265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:09.712283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:09.712308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:09.712322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:09.712352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:10.661754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:10.661911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.661960 | orchestrator | 2026-04-16 07:19:10.661970 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-04-16 07:19:10.661978 | orchestrator | Thursday 16 April 2026 07:19:10 +0000 (0:00:03.986) 0:01:34.726 ******** 2026-04-16 07:19:10.661988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:10.661996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.662008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:10.662069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.619692 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:11.619810 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:11.619860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.619874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.619905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.619953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:11.619986 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:11.620004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.620023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.620041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 
'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 07:19:11.620060 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:11.620080 | orchestrator | 2026-04-16 07:19:11.620099 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-04-16 07:19:11.620120 | orchestrator | Thursday 16 April 2026 07:19:11 +0000 (0:00:00.857) 0:01:35.583 ******** 2026-04-16 07:19:11.620140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:11.620173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:11.620189 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:11.620203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:11.620216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:11.620229 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:11.620242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:11.620280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:20.734129 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:20.734240 | orchestrator | 2026-04-16 07:19:20.734258 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-04-16 07:19:20.734271 | orchestrator | Thursday 16 April 2026 07:19:11 +0000 (0:00:00.926) 0:01:36.510 ******** 2026-04-16 07:19:20.734283 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:19:20.734295 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:19:20.734306 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:19:20.734317 | orchestrator | 2026-04-16 07:19:20.734328 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-04-16 07:19:20.734339 | orchestrator | Thursday 16 April 2026 07:19:13 +0000 (0:00:01.510) 0:01:38.021 ******** 2026-04-16 07:19:20.734350 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:19:20.734361 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:19:20.734371 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:19:20.734382 | orchestrator | 2026-04-16 07:19:20.734393 | 
orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-04-16 07:19:20.734404 | orchestrator | Thursday 16 April 2026 07:19:15 +0000 (0:00:02.128) 0:01:40.149 ******** 2026-04-16 07:19:20.734432 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:20.734443 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:20.734465 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:20.734476 | orchestrator | 2026-04-16 07:19:20.734487 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-04-16 07:19:20.734498 | orchestrator | Thursday 16 April 2026 07:19:15 +0000 (0:00:00.338) 0:01:40.488 ******** 2026-04-16 07:19:20.734508 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:20.734519 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:20.734541 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:20.734552 | orchestrator | 2026-04-16 07:19:20.734591 | orchestrator | TASK [include_role : designate] ************************************************ 2026-04-16 07:19:20.734605 | orchestrator | Thursday 16 April 2026 07:19:16 +0000 (0:00:00.318) 0:01:40.806 ******** 2026-04-16 07:19:20.734618 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:19:20.734631 | orchestrator | 2026-04-16 07:19:20.734644 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-04-16 07:19:20.734656 | orchestrator | Thursday 16 April 2026 07:19:17 +0000 (0:00:01.008) 0:01:41.814 ******** 2026-04-16 07:19:20.734675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:20.734694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:20.734733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:20.734769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:20.734782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:20.734834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:20.734848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:20.734872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:20.734884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:20.734904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:21.455435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:21.455488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.455628 | orchestrator | 2026-04-16 07:19:21.455639 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-04-16 07:19:21.455647 | orchestrator | Thursday 16 April 2026 07:19:20 +0000 (0:00:03.753) 0:01:45.569 ******** 2026-04-16 07:19:21.455663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:21.621476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:21.621557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621672 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:21.621696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:21.621705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:21.621720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:21.621751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.484884 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:32.484999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:32.485061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 07:19:32.485077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.485089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.485101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.485132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.485152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-04-16 07:19:32.485164 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:32.485176 | orchestrator | 2026-04-16 07:19:32.485188 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-04-16 07:19:32.485201 | orchestrator | Thursday 16 April 2026 07:19:22 +0000 (0:00:01.068) 0:01:46.637 ******** 2026-04-16 07:19:32.485213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485245 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:32.485257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485268 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485280 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:32.485291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:19:32.485313 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:32.485324 | orchestrator | 2026-04-16 07:19:32.485335 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-04-16 07:19:32.485346 | orchestrator | Thursday 16 April 2026 07:19:23 +0000 (0:00:01.278) 0:01:47.915 ******** 2026-04-16 07:19:32.485357 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:19:32.485369 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:19:32.485379 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:19:32.485390 | orchestrator | 2026-04-16 07:19:32.485403 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-04-16 07:19:32.485416 | orchestrator | Thursday 16 April 2026 07:19:24 +0000 (0:00:01.229) 0:01:49.145 ******** 2026-04-16 07:19:32.485429 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:19:32.485441 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:19:32.485454 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:19:32.485466 | orchestrator | 2026-04-16 
07:19:32.485479 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-04-16 07:19:32.485491 | orchestrator | Thursday 16 April 2026 07:19:26 +0000 (0:00:02.053) 0:01:51.198 ******** 2026-04-16 07:19:32.485511 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:32.485523 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:32.485534 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:32.485575 | orchestrator | 2026-04-16 07:19:32.485589 | orchestrator | TASK [include_role : glance] *************************************************** 2026-04-16 07:19:32.485600 | orchestrator | Thursday 16 April 2026 07:19:27 +0000 (0:00:00.539) 0:01:51.738 ******** 2026-04-16 07:19:32.485610 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:19:32.485621 | orchestrator | 2026-04-16 07:19:32.485632 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-04-16 07:19:32.485642 | orchestrator | Thursday 16 April 2026 07:19:28 +0000 (0:00:00.842) 0:01:52.580 ******** 2026-04-16 07:19:32.485716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 07:19:32.560016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:32.560143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 07:19:32.560200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 07:19:32.560232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:32.560269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:35.925985 | orchestrator | 2026-04-16 07:19:35.926151 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-04-16 07:19:35.926196 | orchestrator | Thursday 16 April 2026 07:19:32 +0000 (0:00:04.636) 0:01:57.217 ******** 2026-04-16 07:19:35.926216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 07:19:35.926247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:35.926261 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:35.926296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 07:19:35.926324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:35.926337 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:35.926359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 07:19:47.561370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.1.1.20260328', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 
ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-04-16 07:19:47.561505 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:47.561564 | orchestrator | 2026-04-16 07:19:47.561584 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-04-16 07:19:47.561604 | orchestrator | Thursday 16 April 2026 07:19:36 +0000 (0:00:03.389) 0:02:00.607 ******** 2026-04-16 07:19:47.561623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 07:19:47.561668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 07:19:47.561687 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:47.561705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 07:19:47.561748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 07:19:47.561767 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:47.561785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-04-16 07:19:47.561803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-04-16 07:19:47.561819 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:47.561836 | orchestrator |
2026-04-16 07:19:47.561852 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-04-16 07:19:47.561867 | orchestrator | Thursday 16 April 2026 07:19:39 +0000 (0:00:03.611) 0:02:04.219 ********
2026-04-16 07:19:47.561883 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:19:47.561900 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:47.561918 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:47.561936 | orchestrator |
2026-04-16 07:19:47.561952 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-04-16 07:19:47.561969 | orchestrator | Thursday 16 April 2026 07:19:41 +0000 (0:00:01.514) 0:02:05.734 ********
2026-04-16 07:19:47.561986 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:19:47.562004 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:47.562113 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:47.562154 | orchestrator |
2026-04-16 07:19:47.562165 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-04-16 07:19:47.562175 | orchestrator | Thursday 16 April 2026 07:19:43 +0000 (0:00:02.032) 0:02:07.766 ********
2026-04-16 07:19:47.562185 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:47.562194 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:47.562204 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:47.562214 | orchestrator |
2026-04-16 07:19:47.562223 | orchestrator | TASK [include_role : grafana] **************************************************
2026-04-16 07:19:47.562233 | orchestrator
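As a reading aid: the `custom_member_list` entries logged in the glance items above are literal HAProxy `server` directives, and `backend_http_extra` carries extra backend options. A backend stanza assembled from them would look roughly like the sketch below; the stanza name and overall layout are assumptions for illustration, since the real output is rendered by the kolla-ansible haproxy templates:

```
# Sketch only: an internal glance_api backend as it might be rendered from the
# logged member list and backend_http_extra options ('timeout server 6h',
# 'option httpchk'). Stanza name and layout are assumed, not copied from the
# actual /etc/kolla/haproxy configuration.
backend glance_api_back
    mode http
    option httpchk
    timeout server 6h
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

In the TLS-proxy variant of the same list, each `server` line additionally carries `ssl verify required ca-file ca-certificates.crt`, which is why those items only differ in the member suffix and `tls_backend: 'yes'`.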
| Thursday 16 April 2026 07:19:43 +0000 (0:00:00.321) 0:02:08.088 ******** 2026-04-16 07:19:47.562243 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:19:47.562253 | orchestrator | 2026-04-16 07:19:47.562262 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-04-16 07:19:47.562272 | orchestrator | Thursday 16 April 2026 07:19:44 +0000 (0:00:01.088) 0:02:09.177 ******** 2026-04-16 07:19:47.562308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:47.562334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:57.352833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:19:57.352946 | orchestrator | 2026-04-16 07:19:57.352964 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-04-16 07:19:57.352977 | orchestrator | Thursday 16 April 2026 07:19:47 +0000 (0:00:03.291) 0:02:12.468 ******** 2026-04-16 07:19:57.353008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:57.353044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:57.353057 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:57.353100 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:19:57.353112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:19:57.353124 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:19:57.353135 | orchestrator | 2026-04-16 07:19:57.353146 | orchestrator | TASK [haproxy-config : 
Configuring firewall for grafana] ***********************
2026-04-16 07:19:57.353158 | orchestrator | Thursday 16 April 2026 07:19:48 +0000 (0:00:00.454) 0:02:12.923 ********
2026-04-16 07:19:57.353170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353197 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:57.353233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353258 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:57.353269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:19:57.353301 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:57.353312 | orchestrator |
2026-04-16 07:19:57.353323 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-04-16 07:19:57.353333 | orchestrator | Thursday 16 April 2026 07:19:49 +0000 (0:00:00.912) 0:02:13.836 ********
2026-04-16 07:19:57.353345 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:19:57.353358 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:57.353370 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:57.353383 | orchestrator |
2026-04-16 07:19:57.353395 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-04-16 07:19:57.353408 | orchestrator | Thursday 16 April 2026 07:19:50 +0000 (0:00:01.205) 0:02:15.042 ********
2026-04-16 07:19:57.353420 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:19:57.353432 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:19:57.353444 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:19:57.353457 | orchestrator |
2026-04-16 07:19:57.353474 | orchestrator | TASK [include_role : heat] *****************************************************
2026-04-16 07:19:57.353488 | orchestrator | Thursday 16 April 2026 07:19:52 +0000 (0:00:00.393) 0:02:17.078 ********
2026-04-16 07:19:57.353500 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:19:57.353537 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:19:57.353548 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:19:57.353559 | orchestrator |
2026-04-16 07:19:57.353573 | orchestrator | TASK [include_role : horizon] **************************************************
2026-04-16 07:19:57.353590 | orchestrator | Thursday 16 April 2026 07:19:52 +0000 (0:00:00.393) 0:02:17.471 ********
2026-04-16 07:19:57.353608 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:19:57.353626 | orchestrator |
2026-04-16 07:19:57.353648 |
orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-04-16 07:19:57.353667 | orchestrator | Thursday 16 April 2026 07:19:54 +0000 (0:00:01.162) 0:02:18.634 ******** 2026-04-16 07:19:57.353705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 07:19:58.383886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 07:19:58.384019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 07:19:58.384074 | orchestrator | 2026-04-16 07:19:58.384097 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-04-16 07:19:58.384118 | orchestrator | Thursday 16 April 2026 07:19:57 +0000 (0:00:03.686) 0:02:22.320 ******** 2026-04-16 07:19:58.384151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 07:19:58.384172 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:19:58.384212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 07:20:04.338709 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:04.338861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 07:20:04.338883 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:04.338896 | orchestrator | 2026-04-16 07:20:04.338908 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-04-16 07:20:04.338942 | orchestrator | Thursday 16 April 2026 07:19:58 +0000 (0:00:01.097) 0:02:23.418 ******** 2026-04-16 07:20:04.338956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.338970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 07:20:04.338984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.338997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 07:20:04.339009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-16 07:20:04.339022 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:04.339059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.339072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 07:20:04.339083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.339095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 
07:20:04.339106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-16 07:20:04.339117 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:04.339128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.339140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 07:20:04.339159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-04-16 07:20:04.339174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-04-16 07:20:04.339188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-04-16 07:20:04.339200 | orchestrator | 
skipping: [testbed-node-2]
2026-04-16 07:20:04.339214 | orchestrator |
2026-04-16 07:20:04.339228 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-04-16 07:20:04.339241 | orchestrator | Thursday 16 April 2026 07:20:00 +0000 (0:00:01.423) 0:02:24.842 ********
2026-04-16 07:20:04.339254 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:20:04.339268 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:20:04.339280 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:20:04.339293 | orchestrator |
2026-04-16 07:20:04.339306 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-04-16 07:20:04.339319 | orchestrator | Thursday 16 April 2026 07:20:01 +0000 (0:00:01.216) 0:02:26.058 ********
2026-04-16 07:20:04.339332 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:20:04.339345 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:20:04.339358 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:20:04.339370 | orchestrator |
2026-04-16 07:20:04.339383 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-04-16 07:20:04.339395 | orchestrator | Thursday 16 April 2026 07:20:03 +0000 (0:00:02.134) 0:02:28.192 ********
2026-04-16 07:20:04.339408 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:20:04.339422 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:20:04.339435 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:20:04.339448 | orchestrator |
2026-04-16 07:20:04.339466 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-04-16 07:20:04.339536 | orchestrator | Thursday 16 April 2026 07:20:04 +0000 (0:00:00.339) 0:02:28.781 ********
2026-04-16 07:20:04.339565 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:20:09.672128 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:20:09.672242 | orchestrator | skipping: [testbed-node-2]
2026-04-16
07:20:09.672257 | orchestrator | 2026-04-16 07:20:09.672270 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-04-16 07:20:09.672302 | orchestrator | Thursday 16 April 2026 07:20:04 +0000 (0:00:00.339) 0:02:29.120 ******** 2026-04-16 07:20:09.672314 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:20:09.672325 | orchestrator | 2026-04-16 07:20:09.672336 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-04-16 07:20:09.672347 | orchestrator | Thursday 16 April 2026 07:20:05 +0000 (0:00:00.942) 0:02:30.063 ******** 2026-04-16 07:20:09.672364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 07:20:09.672405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:09.672419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:09.672433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 07:20:09.672472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:09.672486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:09.672598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 07:20:09.672620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:09.672639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:09.672658 | orchestrator | 2026-04-16 07:20:09.672680 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-04-16 07:20:09.672700 | orchestrator | Thursday 16 April 2026 07:20:09 +0000 (0:00:03.898) 0:02:33.961 ******** 2026-04-16 07:20:09.672738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 07:20:11.222477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:11.222654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:11.222669 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:11.222681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin', 'option httpchk']}}}})  2026-04-16 07:20:11.222691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:11.222713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:11.222721 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:11.222745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 07:20:11.222773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 07:20:11.222782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 07:20:11.222815 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
07:20:11.222825 | orchestrator |
2026-04-16 07:20:11.222834 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-04-16 07:20:11.222843 | orchestrator | Thursday 16 April 2026 07:20:10 +0000 (0:00:00.612) 0:02:34.574 ********
2026-04-16 07:20:11.222852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-16 07:20:11.222862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-16 07:20:11.222871 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:20:11.222878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-16 07:20:11.222886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-16 07:20:11.222893 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:20:11.222908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})
2026-04-16
07:20:11.222920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-04-16 07:20:11.222927 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:11.222934 | orchestrator | 2026-04-16 07:20:11.222942 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-04-16 07:20:11.222955 | orchestrator | Thursday 16 April 2026 07:20:11 +0000 (0:00:01.214) 0:02:35.789 ******** 2026-04-16 07:20:20.363136 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:20.363238 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:20.363251 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:20.363262 | orchestrator | 2026-04-16 07:20:20.363272 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-04-16 07:20:20.363283 | orchestrator | Thursday 16 April 2026 07:20:12 +0000 (0:00:01.214) 0:02:37.003 ******** 2026-04-16 07:20:20.363292 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:20.363301 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:20.363310 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:20.363319 | orchestrator | 2026-04-16 07:20:20.363329 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-04-16 07:20:20.363338 | orchestrator | Thursday 16 April 2026 07:20:14 +0000 (0:00:02.152) 0:02:39.156 ******** 2026-04-16 07:20:20.363348 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:20.363357 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:20.363366 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:20.363375 | orchestrator | 2026-04-16 07:20:20.363384 | orchestrator | TASK [include_role : magnum] 
*************************************************** 2026-04-16 07:20:20.363393 | orchestrator | Thursday 16 April 2026 07:20:14 +0000 (0:00:00.327) 0:02:39.483 ******** 2026-04-16 07:20:20.363402 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:20:20.363411 | orchestrator | 2026-04-16 07:20:20.363420 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-04-16 07:20:20.363429 | orchestrator | Thursday 16 April 2026 07:20:16 +0000 (0:00:01.262) 0:02:40.745 ******** 2026-04-16 07:20:20.363443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:20.363457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:20.363550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:20.363580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:20.363591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:20.363601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:20.363617 | orchestrator | 2026-04-16 07:20:20.363627 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-04-16 07:20:20.363637 | orchestrator | Thursday 16 April 2026 07:20:19 +0000 (0:00:03.803) 0:02:44.549 ******** 2026-04-16 07:20:20.363647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:20.363667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:30.597092 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:30.597255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:30.597286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:30.597305 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:30.597363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:30.597404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:20:30.597426 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:30.597443 | orchestrator | 2026-04-16 07:20:30.597632 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-04-16 07:20:30.597663 | orchestrator | Thursday 16 April 2026 07:20:20 +0000 (0:00:00.722) 0:02:45.272 ******** 2026-04-16 07:20:30.597710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597757 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:30.597778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597817 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
07:20:30.597838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:30.597878 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:30.597896 | orchestrator | 2026-04-16 07:20:30.597916 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-04-16 07:20:30.597955 | orchestrator | Thursday 16 April 2026 07:20:22 +0000 (0:00:01.486) 0:02:46.759 ******** 2026-04-16 07:20:30.597976 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:30.597997 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:30.598093 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:30.598115 | orchestrator | 2026-04-16 07:20:30.598132 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-04-16 07:20:30.598149 | orchestrator | Thursday 16 April 2026 07:20:23 +0000 (0:00:01.214) 0:02:47.973 ******** 2026-04-16 07:20:30.598167 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:30.598183 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:30.598199 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:30.598216 | orchestrator | 2026-04-16 07:20:30.598232 | orchestrator | TASK [include_role : manila] *************************************************** 2026-04-16 07:20:30.598248 | orchestrator | Thursday 16 April 2026 07:20:25 +0000 (0:00:02.224) 0:02:50.197 ******** 2026-04-16 07:20:30.598265 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:20:30.598280 | 
orchestrator | 2026-04-16 07:20:30.598297 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-04-16 07:20:30.598315 | orchestrator | Thursday 16 April 2026 07:20:26 +0000 (0:00:01.285) 0:02:51.483 ******** 2026-04-16 07:20:30.598333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:30.598353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:30.598392 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.488889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:31.489204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:20:31.489346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:31.489401 | orchestrator | 2026-04-16 07:20:31.489422 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-04-16 07:20:31.489442 | orchestrator | Thursday 16 April 2026 07:20:31 +0000 (0:00:04.221) 0:02:55.704 ******** 2026-04-16 07:20:31.489493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:31.489525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623313 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:32.623330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:32.623364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623450 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
07:20:32.623494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:20:32.623511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 07:20:32.623552 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:32.623564 | orchestrator | 2026-04-16 07:20:32.623576 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-04-16 07:20:32.623589 | orchestrator | Thursday 16 April 2026 07:20:31 +0000 (0:00:00.693) 0:02:56.398 ******** 2026-04-16 07:20:32.623602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:32.623626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 
07:20:32.623642 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:32.623655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:32.623677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:44.130009 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:44.130183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:44.130201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:20:44.130214 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:44.130224 | orchestrator | 2026-04-16 07:20:44.130236 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-04-16 07:20:44.130247 | orchestrator | Thursday 16 April 2026 07:20:33 +0000 (0:00:01.343) 0:02:57.742 ******** 2026-04-16 07:20:44.130257 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:44.130267 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:44.130277 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:44.130287 | orchestrator | 2026-04-16 07:20:44.130297 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-04-16 07:20:44.130307 | orchestrator | 
Thursday 16 April 2026 07:20:34 +0000 (0:00:01.225) 0:02:58.968 ******** 2026-04-16 07:20:44.130317 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:44.130327 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:20:44.130337 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:44.130347 | orchestrator | 2026-04-16 07:20:44.130357 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-04-16 07:20:44.130366 | orchestrator | Thursday 16 April 2026 07:20:36 +0000 (0:00:02.174) 0:03:01.143 ******** 2026-04-16 07:20:44.130376 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:20:44.130386 | orchestrator | 2026-04-16 07:20:44.130395 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-04-16 07:20:44.130405 | orchestrator | Thursday 16 April 2026 07:20:38 +0000 (0:00:01.687) 0:03:02.830 ******** 2026-04-16 07:20:44.130415 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 07:20:44.130425 | orchestrator | 2026-04-16 07:20:44.130435 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-04-16 07:20:44.130476 | orchestrator | Thursday 16 April 2026 07:20:41 +0000 (0:00:03.324) 0:03:06.155 ******** 2026-04-16 07:20:44.130518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:20:44.130588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:44.130605 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:44.130618 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:20:44.130632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 
'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:44.130656 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:44.130681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:20:46.788948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:46.789054 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:46.789071 | orchestrator | 2026-04-16 07:20:46.789084 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-04-16 07:20:46.789096 | orchestrator | Thursday 16 April 2026 07:20:44 +0000 (0:00:02.666) 0:03:08.822 ******** 2026-04-16 07:20:46.789129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:20:46.789172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:46.789193 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:46.789239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 
07:20:46.789260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:46.789281 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:46.789300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:20:46.789321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-04-16 07:20:57.519611 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:57.519705 | orchestrator | 2026-04-16 07:20:57.519717 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-04-16 07:20:57.519726 | orchestrator | Thursday 16 April 2026 07:20:47 +0000 (0:00:02.937) 0:03:11.759 ******** 2026-04-16 07:20:57.519736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519773 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:57.519781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519808 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:57.519815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-04-16 07:20:57.519830 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:57.519837 | orchestrator | 2026-04-16 07:20:57.519844 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-04-16 07:20:57.519851 | orchestrator | Thursday 16 April 2026 07:20:49 +0000 (0:00:02.742) 0:03:14.502 ******** 2026-04-16 07:20:57.519858 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:20:57.519878 | orchestrator 
| ok: [testbed-node-1] 2026-04-16 07:20:57.519886 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:20:57.519892 | orchestrator | 2026-04-16 07:20:57.519899 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-04-16 07:20:57.519906 | orchestrator | Thursday 16 April 2026 07:20:52 +0000 (0:00:02.073) 0:03:16.575 ******** 2026-04-16 07:20:57.519913 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:57.519919 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:57.519926 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:57.519933 | orchestrator | 2026-04-16 07:20:57.519940 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-04-16 07:20:57.519947 | orchestrator | Thursday 16 April 2026 07:20:53 +0000 (0:00:01.640) 0:03:18.216 ******** 2026-04-16 07:20:57.519959 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:20:57.519966 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:20:57.519972 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:20:57.519979 | orchestrator | 2026-04-16 07:20:57.519986 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-04-16 07:20:57.519992 | orchestrator | Thursday 16 April 2026 07:20:54 +0000 (0:00:00.632) 0:03:18.848 ******** 2026-04-16 07:20:57.519999 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:20:57.520006 | orchestrator | 2026-04-16 07:20:57.520013 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-04-16 07:20:57.520019 | orchestrator | Thursday 16 April 2026 07:20:55 +0000 (0:00:01.227) 0:03:20.076 ******** 2026-04-16 07:20:57.520027 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:20:57.520039 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:20:57.520047 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 
'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:20:57.520054 | orchestrator | 2026-04-16 07:20:57.520061 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-04-16 07:20:57.520068 | orchestrator | Thursday 16 April 2026 07:20:57 +0000 (0:00:01.869) 0:03:21.946 ******** 2026-04-16 07:20:57.520080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:21:06.906729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:21:06.906843 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:06.906860 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:06.906880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:21:06.906901 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:06.906920 | orchestrator | 2026-04-16 07:21:06.906941 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-04-16 07:21:06.906987 | orchestrator | Thursday 16 April 2026 07:20:57 +0000 (0:00:00.442) 0:03:22.388 ******** 2026-04-16 07:21:06.907017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 07:21:06.907031 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:06.907042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 07:21:06.907054 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:06.907065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-04-16 07:21:06.907076 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:06.907088 | orchestrator | 2026-04-16 07:21:06.907099 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-04-16 07:21:06.907110 | orchestrator | Thursday 16 April 2026 07:20:58 +0000 (0:00:00.763) 0:03:23.151 ******** 2026-04-16 07:21:06.907121 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:06.907132 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:06.907143 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:06.907154 | orchestrator | 2026-04-16 07:21:06.907165 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-04-16 07:21:06.907176 | orchestrator | Thursday 16 April 2026 07:20:59 +0000 (0:00:00.917) 0:03:24.069 ******** 2026-04-16 07:21:06.907227 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:06.907240 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:06.907253 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:06.907266 | orchestrator | 2026-04-16 07:21:06.907279 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-04-16 07:21:06.907292 | orchestrator | Thursday 16 April 2026 07:21:00 +0000 (0:00:01.472) 0:03:25.541 ******** 2026-04-16 07:21:06.907305 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:06.907317 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 07:21:06.907330 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:06.907343 | orchestrator | 2026-04-16 07:21:06.907356 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-04-16 07:21:06.907368 | orchestrator | Thursday 16 April 2026 07:21:01 +0000 (0:00:00.360) 0:03:25.902 ******** 2026-04-16 07:21:06.907380 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:21:06.907393 | orchestrator | 2026-04-16 07:21:06.907408 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-04-16 07:21:06.907450 | orchestrator | Thursday 16 April 2026 07:21:02 +0000 (0:00:01.515) 0:03:27.417 ******** 2026-04-16 07:21:06.907487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:21:06.907507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:06.907529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-16 07:21:06.907552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 
'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-16 07:21:06.907577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.083940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-04-16 07:21:07.084032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.084061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 07:21:07.084072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:07.084124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.084135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-16 07:21:07.084159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.084169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.084184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 07:21:07.084201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:21:07.084210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:07.084225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.211956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-16 07:21:07.212082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-16 07:21:07.212142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.212170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.212217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:21:07.212241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.212273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.212307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 07:21:07.212330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 
'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-16 07:21:07.212374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:07.212431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.321371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-16 07:21:07.321668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-16 07:21:07.321704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.321726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.321748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.321770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:07.321817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 
'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:07.321907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 07:21:07.321949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 07:21:07.321970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:07.321989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:07.322131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.616268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-16 07:21:08.616371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:08.616382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.616392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 07:21:08.616402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:08.616480 | orchestrator | 2026-04-16 07:21:08.616491 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-04-16 07:21:08.616499 | orchestrator | Thursday 16 April 2026 07:21:07 +0000 (0:00:04.663) 0:03:32.081 ******** 2026-04-16 07:21:08.616526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:08.616541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.616550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-16 07:21:08.616557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-16 07:21:08.616569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.904487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:08.904597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:08.904609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 07:21:08.904619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:08.904630 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.904639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-16 07:21:08.904684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:08.904697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:08.904706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-04-16 07:21:08.904714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:08.904723 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:08.904733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:08.904752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:09.031854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-04-16 07:21:09.031943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-04-16 07:21:09.031954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:09.031962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:09.031990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-04-16 07:21:09.032020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 07:21:09.032029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 07:21:09.032037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:09.032045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:09.032052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20260328', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-04-16 07:21:09.032069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-04-16 07:21:09.130319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 
'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20260328', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-04-16 07:21:09.130516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-16 07:21:09.130540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20260328', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-04-16 07:21:09.130577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-16 07:21:09.130590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-04-16 07:21:09.130631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-16 07:21:09.130647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20260328', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-16 07:21:09.130661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-16 07:21:09.130673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-16 07:21:09.130702 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:21:09.130724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 07:21:09.130745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-04-16 07:21:09.130784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-04-16 07:21:19.790572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-04-16 07:21:19.790680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20260328', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-04-16 07:21:19.790701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20260328', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-04-16 07:21:19.790751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-04-16 07:21:19.790786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20260328', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-04-16 07:21:19.790806 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:21:19.790826 | orchestrator |
2026-04-16 07:21:19.790845 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-04-16 07:21:19.790863 | orchestrator | Thursday 16 April 2026 07:21:09 +0000 (0:00:01.787) 0:03:33.869 ********
2026-04-16 07:21:19.790882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.790927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.790947 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:21:19.790964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.790982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.791000 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:21:19.791017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.791035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})
2026-04-16 07:21:19.791067 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:21:19.791085 | orchestrator |
2026-04-16 07:21:19.791100 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-04-16 07:21:19.791111 | orchestrator | Thursday 16 April 2026 07:21:11 +0000 (0:00:01.953) 0:03:35.822 ********
2026-04-16 07:21:19.791122 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:21:19.791135 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:21:19.791146 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:21:19.791158 | orchestrator |
2026-04-16 07:21:19.791169 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-04-16 07:21:19.791179 | orchestrator | Thursday 16 April 2026 07:21:12 +0000 (0:00:01.269) 0:03:37.092 ********
2026-04-16 07:21:19.791191 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:21:19.791201 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:21:19.791212 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:21:19.791223 | orchestrator |
2026-04-16 07:21:19.791233 | orchestrator | TASK [include_role : placement] ************************************************
2026-04-16 07:21:19.791244 | orchestrator | Thursday 16 April 2026 07:21:14 +0000 (0:00:02.225) 0:03:39.318 ********
2026-04-16 07:21:19.791255 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:21:19.791265 | orchestrator |
2026-04-16 07:21:19.791276 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-04-16 07:21:19.791288 | orchestrator | Thursday 16 April 2026 07:21:16 +0000 (0:00:01.601) 0:03:40.919 ********
2026-04-16 07:21:19.791301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:19.791332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:30.382346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:30.382546 | orchestrator |
2026-04-16 07:21:30.382571 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-04-16 07:21:30.382581 | orchestrator | Thursday 16 April 2026 07:21:19 +0000 (0:00:03.559) 0:03:44.478 ********
2026-04-16 07:21:30.382591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:30.382599 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:21:30.382620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:30.382629 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:21:30.382654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:21:30.382672 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:21:30.382678 | orchestrator |
2026-04-16 07:21:30.382684 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-04-16 07:21:30.382691 | orchestrator | Thursday 16 April 2026 07:21:20 +0000 (0:00:00.553) 0:03:45.032 ********
2026-04-16 07:21:30.382698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382716 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:21:30.382722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382734 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:21:30.382740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:21:30.382753 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:21:30.382759 | orchestrator |
2026-04-16 07:21:30.382766 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-04-16 07:21:30.382772 | orchestrator | Thursday 16 April 2026 07:21:21 +0000 (0:00:01.164) 0:03:46.197 ********
2026-04-16 07:21:30.382778 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:21:30.382786 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:21:30.382792 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:21:30.382799 | orchestrator |
2026-04-16 07:21:30.382805 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-04-16 07:21:30.382811 | orchestrator | Thursday 16 April 2026 07:21:22 +0000 (0:00:01.231) 0:03:47.428 ********
2026-04-16 07:21:30.382818 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:21:30.382823 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:21:30.382830 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:21:30.382836 | orchestrator |
2026-04-16 07:21:30.382842 | orchestrator | TASK [include_role : nova] *****************************************************
2026-04-16 07:21:30.382848 | orchestrator | Thursday 16 April 2026 07:21:24 +0000 (0:00:02.127) 0:03:49.556 ********
2026-04-16 07:21:30.382856 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:21:30.382863 | orchestrator |
2026-04-16 07:21:30.382868 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-04-16 07:21:30.382874 | orchestrator | Thursday 16 April 2026 07:21:26 +0000 (0:00:01.581) 0:03:51.138 ********
2026-04-16 07:21:30.382894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.425820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.425909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.425934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.425959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 07:21:32.425983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 07:21:32.425992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.426000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 07:21:32.426007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 07:21:32.426059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:32.426080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 07:21:33.619798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 07:21:33.619870 | orchestrator |
2026-04-16 07:21:33.619877 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-04-16 07:21:33.619883 | orchestrator | Thursday 16 April 2026 07:21:32 +0000 (0:00:05.972) 0:03:57.110 ********
2026-04-16 07:21:33.619891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:33.619910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:21:33.619930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 07:21:33.619946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 07:21:33.619952 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:21:33.619958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes':
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:33.619963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:33.619976 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:21:33.619981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:21:33.619985 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:33.619994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 
'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:45.984713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:21:45.984864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 07:21:45.984945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 07:21:45.984970 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:45.984993 | orchestrator | 2026-04-16 07:21:45.985015 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-04-16 07:21:45.985037 | orchestrator | Thursday 16 April 2026 07:21:34 +0000 (0:00:01.500) 0:03:58.610 ******** 2026-04-16 07:21:45.985057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985140 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:45.985161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985269 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:45.985289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:21:45.985493 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:45.985513 | orchestrator | 2026-04-16 07:21:45.985532 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-04-16 07:21:45.985549 | orchestrator | Thursday 16 April 2026 07:21:35 +0000 (0:00:01.073) 0:03:59.684 ******** 2026-04-16 07:21:45.985564 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:21:45.985581 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:21:45.985597 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:21:45.985612 | orchestrator | 2026-04-16 07:21:45.985629 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-04-16 07:21:45.985645 | orchestrator | Thursday 16 April 2026 07:21:36 +0000 (0:00:01.195) 0:04:00.880 ******** 2026-04-16 07:21:45.985660 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:21:45.985676 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:21:45.985692 | orchestrator | ok: [testbed-node-2] 2026-04-16 
07:21:45.985708 | orchestrator | 2026-04-16 07:21:45.985724 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-04-16 07:21:45.985740 | orchestrator | Thursday 16 April 2026 07:21:38 +0000 (0:00:02.272) 0:04:03.152 ******** 2026-04-16 07:21:45.985755 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:21:45.985770 | orchestrator | 2026-04-16 07:21:45.985786 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-04-16 07:21:45.985803 | orchestrator | Thursday 16 April 2026 07:21:40 +0000 (0:00:01.936) 0:04:05.089 ******** 2026-04-16 07:21:45.985820 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-04-16 07:21:45.985839 | orchestrator | 2026-04-16 07:21:45.985856 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-04-16 07:21:45.985872 | orchestrator | Thursday 16 April 2026 07:21:42 +0000 (0:00:01.521) 0:04:06.610 ******** 2026-04-16 07:21:45.985891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 07:21:45.985913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 07:21:45.985949 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-04-16 07:21:58.181626 | orchestrator | 2026-04-16 07:21:58.181761 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-04-16 07:21:58.181860 | orchestrator | Thursday 16 April 2026 07:21:46 +0000 (0:00:04.045) 0:04:10.656 ******** 2026-04-16 07:21:58.181881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.181896 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:58.181909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.181922 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:58.181997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182116 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:58.182129 | orchestrator | 2026-04-16 07:21:58.182150 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-04-16 07:21:58.182163 | orchestrator | Thursday 16 April 2026 07:21:47 +0000 (0:00:01.213) 0:04:11.869 ******** 2026-04-16 07:21:58.182177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182209 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:58.182222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182247 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:58.182260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-04-16 07:21:58.182308 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:58.182321 | orchestrator | 2026-04-16 07:21:58.182334 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 07:21:58.182370 | orchestrator | Thursday 16 April 2026 07:21:49 +0000 (0:00:01.751) 0:04:13.620 ******** 2026-04-16 07:21:58.182383 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:21:58.182396 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:21:58.182408 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:21:58.182420 | orchestrator | 2026-04-16 07:21:58.182434 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 07:21:58.182446 | orchestrator | Thursday 16 April 2026 07:21:51 +0000 (0:00:02.211) 0:04:15.832 ******** 2026-04-16 07:21:58.182458 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:21:58.182471 | orchestrator | ok: 
[testbed-node-1] 2026-04-16 07:21:58.182505 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:21:58.182516 | orchestrator | 2026-04-16 07:21:58.182527 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-04-16 07:21:58.182538 | orchestrator | Thursday 16 April 2026 07:21:54 +0000 (0:00:03.212) 0:04:19.044 ******** 2026-04-16 07:21:58.182550 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-04-16 07:21:58.182562 | orchestrator | 2026-04-16 07:21:58.182573 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-04-16 07:21:58.182584 | orchestrator | Thursday 16 April 2026 07:21:55 +0000 (0:00:00.954) 0:04:19.999 ******** 2026-04-16 07:21:58.182595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182608 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:58.182619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': 
'6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182630 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:58.182646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182658 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:21:58.182669 | orchestrator | 2026-04-16 07:21:58.182680 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-04-16 07:21:58.182699 | orchestrator | Thursday 16 April 2026 07:21:56 +0000 (0:00:01.484) 0:04:21.483 ******** 2026-04-16 07:21:58.182710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182721 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:21:58.182732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:21:58.182744 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:21:58.182762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-04-16 07:22:22.517446 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:22.517562 | orchestrator | 2026-04-16 07:22:22.517576 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-04-16 07:22:22.517587 | orchestrator | Thursday 16 April 2026 07:21:58 +0000 (0:00:01.344) 0:04:22.828 ******** 2026-04-16 07:22:22.517595 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:22.517601 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:22.517607 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:22.517613 | orchestrator | 2026-04-16 07:22:22.517620 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 07:22:22.517627 | orchestrator | Thursday 16 April 2026 07:22:00 +0000 (0:00:01.784) 0:04:24.613 ******** 2026-04-16 07:22:22.517634 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:22.517682 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:22.517689 | orchestrator | ok: [testbed-node-2] 2026-04-16 
07:22:22.517693 | orchestrator | 2026-04-16 07:22:22.517698 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 07:22:22.517703 | orchestrator | Thursday 16 April 2026 07:22:02 +0000 (0:00:02.711) 0:04:27.325 ******** 2026-04-16 07:22:22.517707 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:22.517711 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:22.517715 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:22:22.517719 | orchestrator | 2026-04-16 07:22:22.517723 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-04-16 07:22:22.517727 | orchestrator | Thursday 16 April 2026 07:22:05 +0000 (0:00:03.170) 0:04:30.495 ******** 2026-04-16 07:22:22.517731 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-04-16 07:22:22.517737 | orchestrator | 2026-04-16 07:22:22.517741 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-04-16 07:22:22.517744 | orchestrator | Thursday 16 April 2026 07:22:06 +0000 (0:00:00.894) 0:04:31.390 ******** 2026-04-16 07:22:22.517760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517779 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:22.517784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517788 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:22.517792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517796 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:22.517800 | orchestrator | 2026-04-16 07:22:22.517804 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-04-16 07:22:22.517809 | orchestrator | Thursday 16 April 2026 07:22:08 +0000 (0:00:01.482) 0:04:32.872 ******** 2026-04-16 07:22:22.517813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517817 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:22.517836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517840 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:22.517844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-04-16 07:22:22.517848 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:22.517852 | orchestrator | 2026-04-16 07:22:22.517855 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-04-16 07:22:22.517864 | orchestrator | Thursday 16 April 2026 07:22:09 +0000 (0:00:01.424) 0:04:34.297 ******** 2026-04-16 07:22:22.517867 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:22.517871 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:22.517875 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:22.517879 | orchestrator | 2026-04-16 07:22:22.517883 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-04-16 07:22:22.517886 | orchestrator | Thursday 16 April 2026 07:22:11 +0000 (0:00:01.554) 0:04:35.852 ******** 2026-04-16 07:22:22.517890 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:22.517894 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:22.517898 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:22:22.517902 | orchestrator | 2026-04-16 07:22:22.517905 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-04-16 07:22:22.517909 | orchestrator | Thursday 16 April 2026 07:22:13 +0000 (0:00:02.550) 0:04:38.403 ******** 2026-04-16 07:22:22.517913 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:22.517917 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:22.517921 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:22:22.517924 | orchestrator | 2026-04-16 07:22:22.517928 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-04-16 07:22:22.517932 | orchestrator | Thursday 16 April 2026 07:22:17 +0000 (0:00:03.959) 0:04:42.362 ******** 2026-04-16 07:22:22.517938 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:22:22.517943 | orchestrator | 2026-04-16 07:22:22.517946 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-04-16 07:22:22.517951 | orchestrator | Thursday 16 April 2026 07:22:19 +0000 (0:00:01.335) 0:04:43.697 ******** 2026-04-16 07:22:22.517957 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 07:22:22.517964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:22.517973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:22.830725 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 07:22:22.830741 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 07:22:22.830753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:22.830785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:22.830808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:22.830862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:22.830874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:22.830894 | orchestrator | 2026-04-16 07:22:22.830913 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-04-16 07:22:23.910978 | orchestrator | Thursday 16 April 2026 07:22:22 +0000 (0:00:03.705) 0:04:47.403 ******** 2026-04-16 07:22:23.911089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 07:22:23.911112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:23.911142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:23.911156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:23.911169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:23.911203 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:23.911237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 07:22:23.911251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:23.911269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:23.911280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:23.911292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:23.911303 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:23.911343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 07:22:23.911373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 07:22:35.856457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 07:22:35.856631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 07:22:35.856665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 07:22:35.856687 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:35.856707 | orchestrator | 
2026-04-16 07:22:35.856728 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-04-16 07:22:35.856749 | orchestrator | Thursday 16 April 2026 07:22:24 +0000 (0:00:01.225) 0:04:48.628 ******** 2026-04-16 07:22:35.856770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856850 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:35.856877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856916 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:35.856933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-04-16 07:22:35.856972 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
07:22:35.856990 | orchestrator | 2026-04-16 07:22:35.857009 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-04-16 07:22:35.857029 | orchestrator | Thursday 16 April 2026 07:22:25 +0000 (0:00:00.968) 0:04:49.597 ******** 2026-04-16 07:22:35.857046 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:35.857067 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:35.857087 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:22:35.857107 | orchestrator | 2026-04-16 07:22:35.857127 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-04-16 07:22:35.857140 | orchestrator | Thursday 16 April 2026 07:22:26 +0000 (0:00:01.544) 0:04:51.141 ******** 2026-04-16 07:22:35.857150 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:22:35.857161 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:22:35.857197 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:22:35.857209 | orchestrator | 2026-04-16 07:22:35.857220 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-04-16 07:22:35.857231 | orchestrator | Thursday 16 April 2026 07:22:28 +0000 (0:00:02.319) 0:04:53.460 ******** 2026-04-16 07:22:35.857242 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:22:35.857253 | orchestrator | 2026-04-16 07:22:35.857264 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-04-16 07:22:35.857275 | orchestrator | Thursday 16 April 2026 07:22:30 +0000 (0:00:01.424) 0:04:54.885 ******** 2026-04-16 07:22:35.857329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:35.857350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:35.857375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:35.857399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-16 07:22:37.004036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-16 07:22:37.004167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-16 07:22:37.004205 | orchestrator | 2026-04-16 07:22:37.004220 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-04-16 07:22:37.004232 | orchestrator | Thursday 16 April 2026 07:22:36 +0000 (0:00:06.074) 0:05:00.960 ******** 2026-04-16 07:22:37.004244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:37.004277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-16 07:22:37.004291 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:37.004384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:37.004419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-16 07:22:37.004433 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:37.004444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:37.004467 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-16 07:22:44.381733 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:44.381826 | orchestrator | 2026-04-16 07:22:44.381837 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-04-16 07:22:44.381847 | orchestrator | Thursday 16 April 2026 07:22:37 +0000 (0:00:00.718) 0:05:01.679 ******** 2026-04-16 07:22:44.381869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:44.381897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381917 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:44.381925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:44.381932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381947 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:44.381954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:44.381962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-04-16 07:22:44.381977 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:44.381984 | orchestrator | 2026-04-16 07:22:44.381992 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-04-16 07:22:44.382000 | orchestrator | Thursday 16 April 2026 07:22:38 +0000 (0:00:01.023) 0:05:02.702 ******** 2026-04-16 07:22:44.382007 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:44.382061 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:44.382072 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:44.382080 | orchestrator | 2026-04-16 07:22:44.382087 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-04-16 07:22:44.382094 | orchestrator | Thursday 16 April 2026 07:22:39 +0000 (0:00:00.898) 0:05:03.601 ******** 2026-04-16 07:22:44.382102 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:44.382109 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:44.382116 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:44.382123 | orchestrator | 2026-04-16 07:22:44.382131 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-04-16 07:22:44.382138 | orchestrator | Thursday 16 April 2026 07:22:40 +0000 (0:00:01.427) 0:05:05.029 ******** 2026-04-16 07:22:44.382151 | orchestrator | included: prometheus for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-16 07:22:44.382160 | orchestrator | 2026-04-16 07:22:44.382167 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-04-16 07:22:44.382174 | orchestrator | Thursday 16 April 2026 07:22:41 +0000 (0:00:01.427) 0:05:06.457 ******** 2026-04-16 07:22:44.382203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-16 07:22:44.382215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:44.382224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:44.382233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:44.382241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:44.382255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-16 07:22:46.278682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:46.278814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:46.278841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:46.278888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:46.278928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-16 07:22:46.278983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:46.279038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:46.279060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:46.279079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:46.279098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:46.279118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:46.279152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:46.279191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 
07:22:47.341071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.341159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:47.341171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:47.341213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.341230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.341274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.341357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:22:47.341371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:47.341381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.341401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.341411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.341422 | orchestrator | 2026-04-16 07:22:47.341436 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-04-16 07:22:47.341448 | orchestrator | Thursday 16 April 2026 07:22:46 +0000 (0:00:04.994) 0:05:11.452 ******** 2026-04-16 07:22:47.341478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-16 07:22:47.477136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:47.477240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.477381 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:47.477417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-16 07:22:47.477446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:47.477470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:47.477482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.477543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.631941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.632074 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:47.632098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:47.632111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:47.632125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.632137 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.632171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.632183 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:47.632245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-16 07:22:47.632269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 07:22:47.632281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:47.632364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-16 07:22:47.632377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 07:22:47.632399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:22:54.906890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 
'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20260328', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-04-16 07:22:54.907000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:54.907014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:22:54.907043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 07:22:54.907055 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:54.907069 | orchestrator | 2026-04-16 07:22:54.907081 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-04-16 07:22:54.907094 | orchestrator | Thursday 16 April 2026 07:22:47 +0000 (0:00:00.925) 0:05:12.377 ******** 2026-04-16 07:22:54.907106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907186 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907196 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:22:54.907202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 
'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907228 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:22:54.907234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-04-16 07:22:54.907252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-04-16 07:22:54.907265 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:22:54.907344 | orchestrator | 2026-04-16 07:22:54.907353 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-04-16 07:22:54.907359 | orchestrator | Thursday 
16 April 2026 07:22:49 +0000 (0:00:01.417) 0:05:13.795 ********
2026-04-16 07:22:54.907366 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:22:54.907372 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:22:54.907378 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:22:54.907384 | orchestrator |
2026-04-16 07:22:54.907390 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-04-16 07:22:54.907396 | orchestrator | Thursday 16 April 2026 07:22:49 +0000 (0:00:00.499) 0:05:14.294 ********
2026-04-16 07:22:54.907402 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:22:54.907409 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:22:54.907415 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:22:54.907421 | orchestrator |
2026-04-16 07:22:54.907427 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-04-16 07:22:54.907433 | orchestrator | Thursday 16 April 2026 07:22:51 +0000 (0:00:01.494) 0:05:15.789 ********
2026-04-16 07:22:54.907439 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:22:54.907446 | orchestrator |
2026-04-16 07:22:54.907452 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-04-16 07:22:54.907458 | orchestrator | Thursday 16 April 2026 07:22:52 +0000 (0:00:01.704) 0:05:17.493 ********
2026-04-16 07:22:54.907472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630214 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630241 | orchestrator |
2026-04-16 07:23:04.630250 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-04-16 07:23:04.630258 | orchestrator | Thursday 16 April 2026 07:22:55 +0000 (0:00:02.507) 0:05:20.001 ********
2026-04-16 07:23:04.630334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630344 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:04.630365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630372 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:04.630379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-04-16 07:23:04.630386 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:04.630392 | orchestrator |
2026-04-16 07:23:04.630399 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-04-16 07:23:04.630411 | orchestrator | Thursday 16 April 2026 07:22:55 +0000 (0:00:00.459) 0:05:20.460 ********
2026-04-16 07:23:04.630422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-16 07:23:04.630430 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:04.630436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-16 07:23:04.630443 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:04.630452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-04-16 07:23:04.630463 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:04.630472 | orchestrator |
2026-04-16 07:23:04.630481 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-04-16 07:23:04.630491 | orchestrator | Thursday 16 April 2026 07:22:56 +0000 (0:00:01.060) 0:05:21.520 ********
2026-04-16 07:23:04.630500 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:04.630509 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:04.630525 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:04.630536 | orchestrator |
2026-04-16 07:23:04.630547 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-04-16 07:23:04.630558 | orchestrator | Thursday 16 April 2026 07:22:57 +0000 (0:00:00.508) 0:05:22.029 ********
2026-04-16 07:23:04.630568 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:04.630578 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:04.630588 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:04.630598 | orchestrator |
2026-04-16 07:23:04.630610 | orchestrator | TASK [include_role : skyline] **************************************************
2026-04-16 07:23:04.630621 | orchestrator | Thursday 16 April 2026 07:22:59 +0000 (0:00:01.550) 0:05:23.580 ********
2026-04-16 07:23:04.630633 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:23:04.630644 | orchestrator |
2026-04-16 07:23:04.630656 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-04-16 07:23:04.630667 | orchestrator | Thursday 16 April 2026 07:23:00 +0000 (0:00:01.910) 0:05:25.491 ********
2026-04-16 07:23:04.630678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:04.630696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:08.313749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:08.313865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:08.313885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:08.313918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:08.313954 | orchestrator |
2026-04-16 07:23:08.313969 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-04-16 07:23:08.313981 | orchestrator | Thursday 16 April 2026 07:23:07 +0000 (0:00:06.848) 0:05:32.339 ********
2026-04-16 07:23:08.314001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:08.314014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:08.314093 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:08.314115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:08.314160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:19.160086 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})
2026-04-16 07:23:19.160220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})
2026-04-16 07:23:19.160231 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160241 | orchestrator |
2026-04-16 07:23:19.160284 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-04-16 07:23:19.160295 | orchestrator | Thursday 16 April 2026 07:23:08 +0000 (0:00:00.762) 0:05:33.101 ********
2026-04-16 07:23:19.160348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160410 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160477 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})
2026-04-16 07:23:19.160504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})
2026-04-16 07:23:19.160522 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160531 | orchestrator |
2026-04-16 07:23:19.160540 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-04-16 07:23:19.160549 | orchestrator | Thursday 16 April 2026 07:23:10 +0000 (0:00:01.814) 0:05:34.916 ********
2026-04-16 07:23:19.160557 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:23:19.160566 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:23:19.160575 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:23:19.160583 | orchestrator |
2026-04-16 07:23:19.160592 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-04-16 07:23:19.160606 | orchestrator | Thursday 16 April 2026 07:23:11 +0000 (0:00:01.223) 0:05:36.139 ********
2026-04-16 07:23:19.160615 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:23:19.160624 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:23:19.160632 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:23:19.160642 | orchestrator |
2026-04-16 07:23:19.160653 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-04-16 07:23:19.160663 | orchestrator | Thursday 16 April 2026 07:23:13 +0000 (0:00:02.278) 0:05:38.418 ********
2026-04-16 07:23:19.160673 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160683 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160693 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160703 | orchestrator |
2026-04-16 07:23:19.160713 | orchestrator | TASK [include_role : trove] ****************************************************
2026-04-16 07:23:19.160723 | orchestrator | Thursday 16 April 2026 07:23:14 +0000 (0:00:00.360) 0:05:38.778 ********
2026-04-16 07:23:19.160733 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160743 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160753 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160762 | orchestrator |
2026-04-16 07:23:19.160772 | orchestrator | TASK [include_role : venus] ****************************************************
2026-04-16 07:23:19.160783 | orchestrator | Thursday 16 April 2026 07:23:14 +0000 (0:00:00.722) 0:05:39.501 ********
2026-04-16 07:23:19.160792 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160802 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160812 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160821 | orchestrator |
2026-04-16 07:23:19.160831 | orchestrator | TASK [include_role : watcher] **************************************************
2026-04-16 07:23:19.160841 | orchestrator | Thursday 16 April 2026 07:23:15 +0000 (0:00:00.360) 0:05:39.862 ********
2026-04-16 07:23:19.160851 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160860 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160870 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160880 | orchestrator |
2026-04-16 07:23:19.160890 | orchestrator | TASK [include_role : zun] ******************************************************
2026-04-16 07:23:19.160901 | orchestrator | Thursday 16 April 2026 07:23:15 +0000 (0:00:00.338) 0:05:40.200 ********
2026-04-16 07:23:19.160910 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:23:19.160920 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:23:19.160930 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:23:19.160940 | orchestrator |
2026-04-16 07:23:19.160950 | orchestrator | TASK [include_role : loadbalancer] *********************************************
2026-04-16 07:23:19.160958 | orchestrator | Thursday 16 April 2026 07:23:15 +0000 (0:00:00.342) 0:05:40.543 ********
2026-04-16 07:23:19.160967 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:23:19.160976 | orchestrator |
2026-04-16 07:23:19.160985 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-04-16 07:23:19.160993 | orchestrator | Thursday 16 April 2026 07:23:17 +0000 (0:00:01.945) 0:05:42.488 ********
2026-04-16 07:23:19.161013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-04-16 07:23:22.645729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-16 07:23:22.645857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-04-16 07:23:22.645874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-16 07:23:22.645886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-16 07:23:22.645898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-16 07:23:22.645926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 07:23:22.645959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 07:23:22.645979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 07:23:22.645992 | orchestrator |
2026-04-16 07:23:22.646006 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-04-16 07:23:22.646088 | orchestrator | Thursday 16 April 2026 07:23:21 +0000 (0:00:03.486) 0:05:45.975 ********
2026-04-16 07:23:22.646103 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:23:22.646115 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:23:22.646127 | orchestrator | }
2026-04-16 07:23:22.646138 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:23:22.646149 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:23:22.646160 | orchestrator | }
2026-04-16 07:23:22.646171 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:23:22.646182 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:23:22.646193 | orchestrator | }
2026-04-16 07:23:22.646204 | orchestrator |
2026-04-16 07:23:22.646215 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:23:22.646226 | orchestrator | Thursday 16 April 2026 07:23:22 +0000 (0:00:00.783) 0:05:46.759 ********
2026-04-16 07:23:22.646238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-04-16 07:23:22.646276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-04-16 07:23:22.646289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-04-16 07:23:22.646303 | orchestrator |
skipping: [testbed-node-0] 2026-04-16 07:23:22.646323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-04-16 07:23:22.646355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:25:09.777416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-04-16 
07:25:09.777510 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.777521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-04-16 07:25:09.777530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-04-16 07:25:09.777536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-04-16 07:25:09.777543 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.777550 | orchestrator | 2026-04-16 07:25:09.777557 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-04-16 07:25:09.777565 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-16 07:25:09.777572 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-16 07:25:09.777603 | orchestrator | Thursday 16 April 2026 07:23:23 +0000 (0:00:01.701) 0:05:48.461 ******** 2026-04-16 07:25:09.777610 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.777618 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.777624 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.777630 | orchestrator | 2026-04-16 07:25:09.777637 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-04-16 07:25:09.777643 | orchestrator | Thursday 16 April 2026 07:23:24 +0000 (0:00:00.731) 0:05:49.193 ******** 2026-04-16 07:25:09.777649 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.777666 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.777672 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.777678 | orchestrator | 2026-04-16 07:25:09.777685 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-04-16 07:25:09.777691 | orchestrator | Thursday 16 April 2026 07:23:25 +0000 (0:00:00.384) 0:05:49.577 ******** 2026-04-16 07:25:09.777697 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777704 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777710 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777716 | orchestrator | 2026-04-16 07:25:09.777722 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-04-16 07:25:09.777729 | orchestrator | Thursday 16 April 
2026 07:23:31 +0000 (0:00:06.586) 0:05:56.163 ******** 2026-04-16 07:25:09.777735 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777741 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777747 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777753 | orchestrator | 2026-04-16 07:25:09.777760 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-04-16 07:25:09.777766 | orchestrator | Thursday 16 April 2026 07:23:37 +0000 (0:00:06.045) 0:06:02.208 ******** 2026-04-16 07:25:09.777772 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777778 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777785 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777791 | orchestrator | 2026-04-16 07:25:09.777809 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-04-16 07:25:09.777816 | orchestrator | Thursday 16 April 2026 07:23:43 +0000 (0:00:06.083) 0:06:08.292 ******** 2026-04-16 07:25:09.777822 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777828 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777835 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777841 | orchestrator | 2026-04-16 07:25:09.777847 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-04-16 07:25:09.777853 | orchestrator | Thursday 16 April 2026 07:23:50 +0000 (0:00:06.802) 0:06:15.095 ******** 2026-04-16 07:25:09.777859 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.777865 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.777871 | orchestrator | 2026-04-16 07:25:09.777878 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-04-16 07:25:09.777884 | orchestrator | Thursday 16 April 2026 07:23:54 +0000 (0:00:03.954) 0:06:19.049 ******** 2026-04-16 07:25:09.777890 | 
orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777896 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777902 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777908 | orchestrator | 2026-04-16 07:25:09.777915 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-04-16 07:25:09.777921 | orchestrator | Thursday 16 April 2026 07:24:07 +0000 (0:00:12.782) 0:06:31.831 ******** 2026-04-16 07:25:09.777927 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.777933 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.777939 | orchestrator | 2026-04-16 07:25:09.777945 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-04-16 07:25:09.777952 | orchestrator | Thursday 16 April 2026 07:24:10 +0000 (0:00:03.720) 0:06:35.552 ******** 2026-04-16 07:25:09.777963 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:25:09.777969 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:25:09.777976 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:25:09.777982 | orchestrator | 2026-04-16 07:25:09.777988 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-04-16 07:25:09.777994 | orchestrator | Thursday 16 April 2026 07:24:17 +0000 (0:00:06.334) 0:06:41.887 ******** 2026-04-16 07:25:09.778000 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778007 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.778013 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778063 | orchestrator | 2026-04-16 07:25:09.778070 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-04-16 07:25:09.778076 | orchestrator | Thursday 16 April 2026 07:24:23 +0000 (0:00:05.856) 0:06:47.744 ******** 2026-04-16 07:25:09.778082 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778089 | orchestrator | 
skipping: [testbed-node-2] 2026-04-16 07:25:09.778095 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778101 | orchestrator | 2026-04-16 07:25:09.778107 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-04-16 07:25:09.778114 | orchestrator | Thursday 16 April 2026 07:24:29 +0000 (0:00:05.872) 0:06:53.616 ******** 2026-04-16 07:25:09.778120 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778126 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.778207 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778217 | orchestrator | 2026-04-16 07:25:09.778224 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-04-16 07:25:09.778230 | orchestrator | Thursday 16 April 2026 07:24:34 +0000 (0:00:05.841) 0:06:59.458 ******** 2026-04-16 07:25:09.778236 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778242 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.778249 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778255 | orchestrator | 2026-04-16 07:25:09.778261 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-04-16 07:25:09.778267 | orchestrator | Thursday 16 April 2026 07:24:41 +0000 (0:00:06.316) 0:07:05.775 ******** 2026-04-16 07:25:09.778273 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.778279 | orchestrator | 2026-04-16 07:25:09.778286 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-04-16 07:25:09.778292 | orchestrator | Thursday 16 April 2026 07:24:44 +0000 (0:00:03.417) 0:07:09.192 ******** 2026-04-16 07:25:09.778298 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778304 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.778311 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778317 | orchestrator | 2026-04-16 
07:25:09.778323 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-04-16 07:25:09.778329 | orchestrator | Thursday 16 April 2026 07:24:56 +0000 (0:00:12.004) 0:07:21.196 ******** 2026-04-16 07:25:09.778335 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.778341 | orchestrator | 2026-04-16 07:25:09.778348 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-04-16 07:25:09.778354 | orchestrator | Thursday 16 April 2026 07:25:01 +0000 (0:00:04.615) 0:07:25.811 ******** 2026-04-16 07:25:09.778365 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:25:09.778372 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:25:09.778378 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:25:09.778384 | orchestrator | 2026-04-16 07:25:09.778390 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-04-16 07:25:09.778396 | orchestrator | Thursday 16 April 2026 07:25:07 +0000 (0:00:06.273) 0:07:32.085 ******** 2026-04-16 07:25:09.778402 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.778408 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.778415 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.778421 | orchestrator | 2026-04-16 07:25:09.778427 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-04-16 07:25:09.778438 | orchestrator | Thursday 16 April 2026 07:25:08 +0000 (0:00:00.951) 0:07:33.036 ******** 2026-04-16 07:25:09.778445 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:09.778451 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:09.778457 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:09.778463 | orchestrator | 2026-04-16 07:25:09.778469 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:25:09.778476 | orchestrator | testbed-node-0 : ok=129  
changed=30  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-16 07:25:09.778490 | orchestrator | testbed-node-1 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-16 07:25:11.261716 | orchestrator | testbed-node-2 : ok=128  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-04-16 07:25:11.261819 | orchestrator | 2026-04-16 07:25:11.261834 | orchestrator | 2026-04-16 07:25:11.261846 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:25:11.261859 | orchestrator | Thursday 16 April 2026 07:25:10 +0000 (0:00:02.032) 0:07:35.069 ******** 2026-04-16 07:25:11.261870 | orchestrator | =============================================================================== 2026-04-16 07:25:11.261881 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.78s 2026-04-16 07:25:11.261892 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.00s 2026-04-16 07:25:11.261903 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.85s 2026-04-16 07:25:11.261914 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 6.80s 2026-04-16 07:25:11.261925 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 6.59s 2026-04-16 07:25:11.261935 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.33s 2026-04-16 07:25:11.261946 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 6.32s 2026-04-16 07:25:11.261957 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.27s 2026-04-16 07:25:11.261968 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 6.08s 2026-04-16 07:25:11.261978 | orchestrator | haproxy-config : Copying over opensearch 
haproxy config ----------------- 6.07s 2026-04-16 07:25:11.261989 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 6.05s 2026-04-16 07:25:11.262000 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.97s 2026-04-16 07:25:11.262011 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 5.87s 2026-04-16 07:25:11.262085 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 5.86s 2026-04-16 07:25:11.262097 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 5.84s 2026-04-16 07:25:11.262107 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.99s 2026-04-16 07:25:11.262119 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.66s 2026-04-16 07:25:11.262177 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.64s 2026-04-16 07:25:11.262191 | orchestrator | loadbalancer : Wait for master proxysql to start ------------------------ 4.62s 2026-04-16 07:25:11.262202 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.22s 2026-04-16 07:25:11.446123 | orchestrator | + osism apply -a upgrade opensearch 2026-04-16 07:25:12.719071 | orchestrator | 2026-04-16 07:25:12 | INFO  | Prepare task for execution of opensearch. 2026-04-16 07:25:12.781452 | orchestrator | 2026-04-16 07:25:12 | INFO  | Task 201c6df7-ef6a-4d11-b607-65cd4a4179c6 (opensearch) was prepared for execution. 2026-04-16 07:25:12.781583 | orchestrator | 2026-04-16 07:25:12 | INFO  | It takes a moment until task 201c6df7-ef6a-4d11-b607-65cd4a4179c6 (opensearch) has been started and output is visible here. 
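The handler sequence in the log above performs a rolling restart of the HA load-balancer stack: the backup nodes (testbed-node-1/2) have keepalived, haproxy and proxysql stopped and restarted first, with a wait after each start, and only then is the master (testbed-node-0) cycled, so keepalived can fail the VIP over to an already-upgraded backup and the VIP stays served throughout. A minimal sketch of that ordering, with illustrative step strings standing in for the actual kolla-ansible handler tasks:

```python
# Rolling restart order for the HA load-balancer stack, mirroring the
# kolla-ansible handlers in the log above. Step names are illustrative;
# the real steps are Ansible handler tasks, not Python calls.

def rolling_restart(master, backups):
    """Return the ordered upgrade steps for haproxy/proxysql/keepalived."""
    nodes = ",".join(backups)
    steps = []

    # Phase 1: cycle the backup nodes while the master keeps the VIP.
    for svc in ("keepalived", "haproxy", "proxysql"):
        steps.append(f"stop backup {svc} on {nodes}")
    for svc in ("haproxy", "proxysql"):
        steps.append(f"start backup {svc} on {nodes}")
        steps.append(f"wait for backup {svc} on {nodes}")
    steps.append(f"start backup keepalived on {nodes}")

    # Phase 2: only now cycle the master; keepalived fails the VIP over
    # to an upgraded backup while the master containers are down.
    for svc in ("haproxy", "proxysql", "keepalived"):
        steps.append(f"stop master {svc} on {master}")
    for svc in ("haproxy", "proxysql"):
        steps.append(f"start master {svc} on {master}")
        steps.append(f"wait for master {svc} on {master}")
    steps.append(f"start master keepalived on {master}")

    # Phase 3: verify the VIP is served again from every node.
    steps.append("wait for haproxy to listen on VIP")
    steps.append("wait for proxysql to listen on VIP")
    return steps
```

Note how the master is never stopped until every backup has been restarted and passed its wait check, which is what keeps the `skipping`/`changed` pattern in the log split cleanly between testbed-node-0 and the other two nodes.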
2026-04-16 07:25:28.668217 | orchestrator | 2026-04-16 07:25:28.668371 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 07:25:28.668392 | orchestrator | 2026-04-16 07:25:28.668405 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 07:25:28.668418 | orchestrator | Thursday 16 April 2026 07:25:17 +0000 (0:00:01.637) 0:00:01.637 ******** 2026-04-16 07:25:28.668429 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:25:28.668443 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:25:28.668454 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:25:28.668464 | orchestrator | 2026-04-16 07:25:28.668476 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 07:25:28.668488 | orchestrator | Thursday 16 April 2026 07:25:19 +0000 (0:00:01.610) 0:00:03.248 ******** 2026-04-16 07:25:28.668503 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-04-16 07:25:28.668535 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-04-16 07:25:28.668547 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-04-16 07:25:28.668559 | orchestrator | 2026-04-16 07:25:28.668571 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-04-16 07:25:28.668583 | orchestrator | 2026-04-16 07:25:28.668594 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-16 07:25:28.668606 | orchestrator | Thursday 16 April 2026 07:25:20 +0000 (0:00:01.742) 0:00:04.990 ******** 2026-04-16 07:25:28.668619 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:25:28.668630 | orchestrator | 2026-04-16 07:25:28.668642 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-04-16 07:25:28.668653 | orchestrator | Thursday 16 April 2026 07:25:23 +0000 (0:00:02.867) 0:00:07.858 ******** 2026-04-16 07:25:28.668665 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 07:25:28.668678 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 07:25:28.668690 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-04-16 07:25:28.668703 | orchestrator | 2026-04-16 07:25:28.668715 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-04-16 07:25:28.668728 | orchestrator | Thursday 16 April 2026 07:25:25 +0000 (0:00:02.119) 0:00:09.977 ******** 2026-04-16 07:25:28.668745 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:25:28.668762 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:25:28.668825 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 07:25:28.668850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-04-16 07:25:28.668866 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:28.668880 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:28.668900 | orchestrator |
2026-04-16 07:25:28.668913 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-04-16 07:25:28.668927 | orchestrator | Thursday 16 April 2026 07:25:28 +0000 (0:00:02.139) 0:00:12.117 ********
2026-04-16 07:25:28.668940 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:25:28.668953 | orchestrator |
2026-04-16 07:25:28.668973 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-04-16 07:25:33.469665 | orchestrator | Thursday 16 April 2026 07:25:29 +0000 (0:00:01.661) 0:00:13.778 ********
2026-04-16 07:25:33.469806 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:33.469844 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:33.469864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:33.469913 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:33.469974 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:33.470001 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:33.470107 | orchestrator |
2026-04-16 07:25:33.470178 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-04-16 07:25:33.470200 | orchestrator | Thursday 16 April 2026 07:25:32 +0000 (0:00:03.261) 0:00:17.040 ********
2026-04-16 07:25:33.470220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:33.470270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:35.561347 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:25:35.561505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:35.561538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:35.561562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:35.561614 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:25:35.561672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:35.561688 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:25:35.561700 | orchestrator |
2026-04-16 07:25:35.561712 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-04-16 07:25:35.561725 | orchestrator | Thursday 16 April 2026 07:25:34 +0000 (0:00:01.853) 0:00:18.893 ********
2026-04-16 07:25:35.561737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:35.561750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:35.561769 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:25:35.561781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:35.561807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:39.159577 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:25:39.159676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:39.159691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:39.159722 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:25:39.159730 | orchestrator |
2026-04-16 07:25:39.159737 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-04-16 07:25:39.159745 | orchestrator | Thursday 16 April 2026 07:25:36 +0000 (0:00:01.907) 0:00:20.800 ********
2026-04-16 07:25:39.159751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:39.159784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:39.159792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:39.159805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:39.159813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:39.159829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:52.074016 | orchestrator |
2026-04-16 07:25:52.074302 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-04-16 07:25:52.074333 | orchestrator | Thursday 16 April 2026 07:25:40 +0000 (0:00:03.544) 0:00:24.345 ********
2026-04-16 07:25:52.074384 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:25:52.074405 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:25:52.074425 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:25:52.074444 | orchestrator |
2026-04-16 07:25:52.074463 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-04-16 07:25:52.074484 | orchestrator | Thursday 16 April 2026 07:25:43 +0000 (0:00:03.425) 0:00:27.771 ********
2026-04-16 07:25:52.074505 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:25:52.074525 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:25:52.074544 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:25:52.074564 | orchestrator |
2026-04-16 07:25:52.074583 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-04-16 07:25:52.074603 | orchestrator | Thursday 16 April 2026 07:25:46 +0000 (0:00:03.235) 0:00:31.006 ********
2026-04-16 07:25:52.074626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:52.074652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:52.074692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:25:52.074744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:52.074787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:52.074813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:25:52.074835 | orchestrator |
2026-04-16 07:25:52.074854 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] ***
2026-04-16 07:25:52.074875 | orchestrator | Thursday 16 April 2026 07:25:50 +0000 (0:00:03.409) 0:00:34.415 ********
2026-04-16 07:25:52.074896 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:25:52.074916 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:25:52.074937 | orchestrator | }
2026-04-16 07:25:52.074956 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:25:52.074976 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:25:52.074995 | orchestrator | }
2026-04-16 07:25:52.075014 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:25:52.075034 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:25:52.075054 | orchestrator | }
2026-04-16 07:25:52.075075 | orchestrator |
2026-04-16 07:25:52.075128 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:25:52.075164 | orchestrator | Thursday 16 April 2026 07:25:51 +0000 (0:00:01.349) 0:00:35.765 ********
2026-04-16 07:25:52.075257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:29:11.624293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-04-16 07:29:11.624595 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:29:11.624617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 07:29:11.624650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes':
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-16 07:29:11.624690 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:29:11.624724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 07:29:11.624738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-04-16 07:29:11.624750 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:29:11.624761 | orchestrator | 2026-04-16 07:29:11.624773 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-16 07:29:11.624789 | orchestrator | Thursday 16 April 2026 07:25:54 +0000 (0:00:02.370) 0:00:38.136 ******** 2026-04-16 07:29:11.624807 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:29:11.624825 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:29:11.624842 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:29:11.624860 | orchestrator | 2026-04-16 07:29:11.624877 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-16 07:29:11.624896 | orchestrator | Thursday 16 April 2026 07:25:55 +0000 (0:00:01.373) 0:00:39.509 ******** 2026-04-16 07:29:11.624913 | orchestrator | 
2026-04-16 07:29:11.624931 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-16 07:29:11.625032 | orchestrator | Thursday 16 April 2026 07:25:55 +0000 (0:00:00.454) 0:00:39.964 ******** 2026-04-16 07:29:11.625054 | orchestrator | 2026-04-16 07:29:11.625073 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-04-16 07:29:11.625092 | orchestrator | Thursday 16 April 2026 07:25:56 +0000 (0:00:00.433) 0:00:40.398 ******** 2026-04-16 07:29:11.625110 | orchestrator | 2026-04-16 07:29:11.625123 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-04-16 07:29:11.625149 | orchestrator | Thursday 16 April 2026 07:25:57 +0000 (0:00:00.804) 0:00:41.202 ******** 2026-04-16 07:29:11.625162 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:11.625176 | orchestrator | 2026-04-16 07:29:11.625187 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-04-16 07:29:11.625198 | orchestrator | Thursday 16 April 2026 07:26:00 +0000 (0:00:03.608) 0:00:44.811 ******** 2026-04-16 07:29:11.625208 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:11.625219 | orchestrator | 2026-04-16 07:29:11.625230 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-04-16 07:29:11.625241 | orchestrator | Thursday 16 April 2026 07:26:09 +0000 (0:00:08.631) 0:00:53.442 ******** 2026-04-16 07:29:11.625252 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:29:11.625263 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:29:11.625273 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:29:11.625284 | orchestrator | 2026-04-16 07:29:11.625295 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-04-16 07:29:11.625313 | orchestrator | Thursday 16 April 2026 07:27:22 +0000 (0:01:12.863) 
0:02:06.306 ******** 2026-04-16 07:29:11.625324 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:29:11.625335 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:29:11.625347 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:29:11.625358 | orchestrator | 2026-04-16 07:29:11.625368 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-04-16 07:29:11.625379 | orchestrator | Thursday 16 April 2026 07:28:59 +0000 (0:01:36.925) 0:03:43.232 ******** 2026-04-16 07:29:11.625390 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:29:11.625401 | orchestrator | 2026-04-16 07:29:11.625412 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-04-16 07:29:11.625423 | orchestrator | Thursday 16 April 2026 07:29:01 +0000 (0:00:02.002) 0:03:45.235 ******** 2026-04-16 07:29:11.625434 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:11.625444 | orchestrator | 2026-04-16 07:29:11.625455 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-04-16 07:29:11.625466 | orchestrator | Thursday 16 April 2026 07:29:04 +0000 (0:00:03.351) 0:03:48.586 ******** 2026-04-16 07:29:11.625477 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:11.625487 | orchestrator | 2026-04-16 07:29:11.625498 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-04-16 07:29:11.625509 | orchestrator | Thursday 16 April 2026 07:29:07 +0000 (0:00:03.407) 0:03:51.994 ******** 2026-04-16 07:29:11.625520 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:11.625530 | orchestrator | 2026-04-16 07:29:11.625541 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-04-16 07:29:11.625564 | orchestrator | Thursday 16 April 2026 07:29:11 +0000 (0:00:03.673) 0:03:55.667 
******** 2026-04-16 07:29:14.935774 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:29:14.935856 | orchestrator | 2026-04-16 07:29:14.935863 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-04-16 07:29:14.935869 | orchestrator | Thursday 16 April 2026 07:29:12 +0000 (0:00:01.207) 0:03:56.874 ******** 2026-04-16 07:29:14.935873 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:29:14.935877 | orchestrator | 2026-04-16 07:29:14.935882 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:29:14.935886 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 07:29:14.935892 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-16 07:29:14.935896 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-16 07:29:14.935900 | orchestrator | 2026-04-16 07:29:14.935922 | orchestrator | 2026-04-16 07:29:14.935926 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:29:14.935930 | orchestrator | Thursday 16 April 2026 07:29:14 +0000 (0:00:01.752) 0:03:58.627 ******** 2026-04-16 07:29:14.935934 | orchestrator | =============================================================================== 2026-04-16 07:29:14.936002 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 96.93s 2026-04-16 07:29:14.936009 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.86s 2026-04-16 07:29:14.936013 | orchestrator | opensearch : Perform a flush -------------------------------------------- 8.63s 2026-04-16 07:29:14.936016 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.67s 2026-04-16 07:29:14.936020 | orchestrator | 
opensearch : Disable shard allocation ----------------------------------- 3.61s 2026-04-16 07:29:14.936024 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.54s 2026-04-16 07:29:14.936028 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.43s 2026-04-16 07:29:14.936032 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.41s 2026-04-16 07:29:14.936036 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 3.41s 2026-04-16 07:29:14.936040 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.35s 2026-04-16 07:29:14.936043 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.26s 2026-04-16 07:29:14.936047 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.24s 2026-04-16 07:29:14.936051 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.87s 2026-04-16 07:29:14.936055 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.37s 2026-04-16 07:29:14.936058 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.14s 2026-04-16 07:29:14.936062 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.12s 2026-04-16 07:29:14.936066 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.00s 2026-04-16 07:29:14.936070 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.91s 2026-04-16 07:29:14.936074 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.85s 2026-04-16 07:29:14.936078 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.75s 2026-04-16 07:29:15.131350 | orchestrator | + 
osism apply -a upgrade memcached 2026-04-16 07:29:16.408412 | orchestrator | 2026-04-16 07:29:16 | INFO  | Prepare task for execution of memcached. 2026-04-16 07:29:16.473456 | orchestrator | 2026-04-16 07:29:16 | INFO  | Task 155b1574-56c6-4db8-878f-2d682643b57f (memcached) was prepared for execution. 2026-04-16 07:29:16.473569 | orchestrator | 2026-04-16 07:29:16 | INFO  | It takes a moment until task 155b1574-56c6-4db8-878f-2d682643b57f (memcached) has been started and output is visible here. 2026-04-16 07:29:48.818949 | orchestrator | 2026-04-16 07:29:48.819046 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 07:29:48.819058 | orchestrator | 2026-04-16 07:29:48.819066 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 07:29:48.819074 | orchestrator | Thursday 16 April 2026 07:29:21 +0000 (0:00:01.611) 0:00:01.611 ******** 2026-04-16 07:29:48.819081 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:29:48.819090 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:29:48.819097 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:29:48.819104 | orchestrator | 2026-04-16 07:29:48.819111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 07:29:48.819119 | orchestrator | Thursday 16 April 2026 07:29:23 +0000 (0:00:01.849) 0:00:03.461 ******** 2026-04-16 07:29:48.819127 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-04-16 07:29:48.819135 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-04-16 07:29:48.819163 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-04-16 07:29:48.819170 | orchestrator | 2026-04-16 07:29:48.819178 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-04-16 07:29:48.819185 | orchestrator | 2026-04-16 07:29:48.819192 | orchestrator | TASK 
[memcached : include_tasks] *********************************************** 2026-04-16 07:29:48.819210 | orchestrator | Thursday 16 April 2026 07:29:24 +0000 (0:00:01.515) 0:00:04.976 ******** 2026-04-16 07:29:48.819226 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:29:48.819234 | orchestrator | 2026-04-16 07:29:48.819241 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-04-16 07:29:48.819248 | orchestrator | Thursday 16 April 2026 07:29:27 +0000 (0:00:02.424) 0:00:07.401 ******** 2026-04-16 07:29:48.819255 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-16 07:29:48.819263 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-16 07:29:48.819271 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-16 07:29:48.819278 | orchestrator | 2026-04-16 07:29:48.819285 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-04-16 07:29:48.819292 | orchestrator | Thursday 16 April 2026 07:29:29 +0000 (0:00:02.670) 0:00:10.071 ******** 2026-04-16 07:29:48.819299 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-04-16 07:29:48.819307 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-04-16 07:29:48.819314 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-04-16 07:29:48.819321 | orchestrator | 2026-04-16 07:29:48.819328 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-04-16 07:29:48.819335 | orchestrator | Thursday 16 April 2026 07:29:32 +0000 (0:00:02.981) 0:00:13.053 ******** 2026-04-16 07:29:48.819345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:29:48.819356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:29:48.819388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-04-16 07:29:48.819404 | orchestrator | 2026-04-16 07:29:48.819411 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-04-16 07:29:48.819418 | orchestrator | Thursday 16 April 2026 07:29:35 +0000 (0:00:02.378) 0:00:15.432 ******** 2026-04-16 07:29:48.819426 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:29:48.819433 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:29:48.819440 | orchestrator | } 2026-04-16 07:29:48.819448 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 07:29:48.819455 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:29:48.819462 | orchestrator | } 2026-04-16 07:29:48.819469 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 07:29:48.819477 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:29:48.819485 | orchestrator | } 2026-04-16 07:29:48.819493 | orchestrator | 2026-04-16 07:29:48.819501 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 07:29:48.819510 | orchestrator | Thursday 16 April 2026 07:29:36 +0000 (0:00:01.361) 0:00:16.793 ******** 2026-04-16 07:29:48.819519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:29:48.819528 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:29:48.819537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:29:48.819545 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:29:48.819554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-04-16 07:29:48.819563 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:29:48.819571 | orchestrator | 
2026-04-16 07:29:48.819579 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-04-16 07:29:48.819587 | orchestrator | Thursday 16 April 2026 07:29:38 +0000 (0:00:01.916) 0:00:18.710 ******** 2026-04-16 07:29:48.819604 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:29:48.819612 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:29:48.819620 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:29:48.819629 | orchestrator | 2026-04-16 07:29:48.819636 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:29:48.819646 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:29:48.819656 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:29:48.819668 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:29:48.819676 | orchestrator | 2026-04-16 07:29:48.819684 | orchestrator | 2026-04-16 07:29:48.819693 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:29:48.819706 | orchestrator | Thursday 16 April 2026 07:29:48 +0000 (0:00:10.462) 0:00:29.172 ******** 2026-04-16 07:29:49.153912 | orchestrator | =============================================================================== 2026-04-16 07:29:49.154111 | orchestrator | memcached : Restart memcached container -------------------------------- 10.46s 2026-04-16 07:29:49.154130 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.98s 2026-04-16 07:29:49.154142 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.67s 2026-04-16 07:29:49.154153 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.42s 2026-04-16 07:29:49.154164 | 
orchestrator | service-check-containers : memcached | Check containers ----------------- 2.38s 2026-04-16 07:29:49.154175 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.92s 2026-04-16 07:29:49.154186 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.85s 2026-04-16 07:29:49.154197 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.52s 2026-04-16 07:29:49.154208 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.36s 2026-04-16 07:29:49.329786 | orchestrator | + osism apply -a upgrade redis 2026-04-16 07:29:50.562367 | orchestrator | 2026-04-16 07:29:50 | INFO  | Prepare task for execution of redis. 2026-04-16 07:29:50.625713 | orchestrator | 2026-04-16 07:29:50 | INFO  | Task d1637651-2ffb-41fa-aa9f-26d7e2bbab91 (redis) was prepared for execution. 2026-04-16 07:29:50.625836 | orchestrator | 2026-04-16 07:29:50 | INFO  | It takes a moment until task d1637651-2ffb-41fa-aa9f-26d7e2bbab91 (redis) has been started and output is visible here. 
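A note on reading these transcripts: each play ends with a `PLAY RECAP` line per host (see the opensearch and memcached runs above), and a run can be treated as healthy when every host reports `failed=0` and `unreachable=0`. A minimal sketch of a recap checker for such console logs (the regex and the `recap_ok` helper are illustrative, not part of osism or Zuul):

```python
import re

# Matches an Ansible PLAY RECAP host line, e.g.
# "testbed-node-0 : ok=20 changed=5 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0"
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(line: str) -> bool:
    """Return True if the recap line reports zero failed and zero unreachable."""
    m = RECAP_RE.search(line)
    if m is None:
        raise ValueError("not a PLAY RECAP host line")
    return m.group("failed") == "0" and m.group("unreachable") == "0"
```

For example, the memcached recap above (`testbed-node-0 : ok=8 changed=3 unreachable=0 failed=0 ...`) would pass this check; a host reporting `unreachable=1` would not.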
2026-04-16 07:30:06.604821 | orchestrator |
2026-04-16 07:30:06.604984 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 07:30:06.605002 | orchestrator |
2026-04-16 07:30:06.605014 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 07:30:06.605025 | orchestrator | Thursday 16 April 2026 07:29:55 +0000 (0:00:01.688) 0:00:01.688 ********
2026-04-16 07:30:06.605037 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:30:06.605049 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:30:06.605060 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:30:06.605071 | orchestrator |
2026-04-16 07:30:06.605082 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 07:30:06.605093 | orchestrator | Thursday 16 April 2026 07:29:57 +0000 (0:00:01.659) 0:00:03.348 ********
2026-04-16 07:30:06.605104 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-04-16 07:30:06.605115 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-04-16 07:30:06.605126 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-04-16 07:30:06.605137 | orchestrator |
2026-04-16 07:30:06.605147 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-04-16 07:30:06.605184 | orchestrator |
2026-04-16 07:30:06.605195 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-04-16 07:30:06.605206 | orchestrator | Thursday 16 April 2026 07:29:59 +0000 (0:00:01.950) 0:00:05.298 ********
2026-04-16 07:30:06.605217 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:30:06.605229 | orchestrator |
2026-04-16 07:30:06.605250 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-04-16 07:30:06.605276 | orchestrator | Thursday 16 April 2026 07:30:02 +0000 (0:00:03.111) 0:00:08.409 ********
2026-04-16 07:30:06.605303 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605331 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605370 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605392 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605439 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605500 | orchestrator |
2026-04-16 07:30:06.605520 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-04-16 07:30:06.605540 | orchestrator | Thursday 16 April 2026 07:30:04 +0000 (0:00:02.551) 0:00:10.961 ********
2026-04-16 07:30:06.605559 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605572 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605590 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605602 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:06.605622 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574405 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574549 | orchestrator |
2026-04-16 07:30:13.574567 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-04-16 07:30:13.574579 | orchestrator | Thursday 16 April 2026 07:30:07 +0000 (0:00:02.911) 0:00:13.872 ********
2026-04-16 07:30:13.574591 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574603 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574630 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574641 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574652 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574688 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574700 | orchestrator |
2026-04-16 07:30:13.574710 | orchestrator | TASK [service-check-containers : redis | Check containers] *********************
2026-04-16 07:30:13.574720 | orchestrator | Thursday 16 April 2026 07:30:11 +0000 (0:00:03.920) 0:00:17.793 ********
2026-04-16 07:30:13.574730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:13.574807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:42.282774 | orchestrator |
2026-04-16 07:30:42.282969 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] ***
2026-04-16 07:30:42.282990 | orchestrator | Thursday 16 April 2026 07:30:14 +0000 (0:00:03.113) 0:00:20.906 ********
2026-04-16 07:30:42.283005 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:30:42.283017 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:30:42.283028 | orchestrator | }
2026-04-16 07:30:42.283040 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:30:42.283051 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:30:42.283062 | orchestrator | }
2026-04-16 07:30:42.283073 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:30:42.283084 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 07:30:42.283095 | orchestrator | }
2026-04-16 07:30:42.283106 | orchestrator |
2026-04-16 07:30:42.283118 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:30:42.283129 | orchestrator | Thursday 16 April 2026 07:30:16 +0000 (0:00:01.822) 0:00:22.728 ********
2026-04-16 07:30:42.283143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283188 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:30:42.283200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283249 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:30:42.283260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-04-16 07:30:42.283308 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:30:42.283320 | orchestrator |
2026-04-16 07:30:42.283333 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-16 07:30:42.283346 | orchestrator | Thursday 16 April 2026 07:30:18 +0000 (0:00:01.902) 0:00:24.631 ********
2026-04-16 07:30:42.283358 | orchestrator |
2026-04-16 07:30:42.283371 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-16 07:30:42.283383 | orchestrator | Thursday 16 April 2026 07:30:18 +0000 (0:00:00.452) 0:00:25.083 ********
2026-04-16 07:30:42.283395 | orchestrator |
2026-04-16 07:30:42.283408 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-04-16 07:30:42.283420 | orchestrator | Thursday 16 April 2026 07:30:19 +0000 (0:00:00.443) 0:00:25.527 ********
2026-04-16 07:30:42.283432 | orchestrator |
2026-04-16 07:30:42.283449 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-04-16 07:30:42.283469 | orchestrator | Thursday 16 April 2026 07:30:20 +0000 (0:00:00.798) 0:00:26.326 ********
2026-04-16 07:30:42.283489 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:30:42.283509 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:30:42.283528 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:30:42.283547 | orchestrator |
2026-04-16 07:30:42.283567 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-04-16 07:30:42.283587 | orchestrator | Thursday 16 April 2026 07:30:30 +0000 (0:00:10.659) 0:00:36.985 ********
2026-04-16 07:30:42.283609 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:30:42.283629 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:30:42.283649 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:30:42.283669 | orchestrator |
2026-04-16 07:30:42.283711 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:30:42.283746 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 07:30:42.283777 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 07:30:42.283797 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 07:30:42.283817 | orchestrator |
2026-04-16 07:30:42.283836 | orchestrator |
2026-04-16 07:30:42.283855 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:30:42.283873 | orchestrator | Thursday 16 April 2026 07:30:41 +0000 (0:00:11.202) 0:00:48.188 ********
2026-04-16 07:30:42.283939 | orchestrator | ===============================================================================
2026-04-16 07:30:42.283950 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.20s
2026-04-16 07:30:42.283961 | orchestrator | redis : Restart redis container ---------------------------------------- 10.66s
2026-04-16 07:30:42.283972 | orchestrator | redis : Copying over redis config files --------------------------------- 3.92s
2026-04-16 07:30:42.283983 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.11s
2026-04-16 07:30:42.283993 | orchestrator | redis : include_tasks --------------------------------------------------- 3.11s
2026-04-16 07:30:42.284004 | orchestrator | redis : Copying over default config.json files -------------------------- 2.91s
2026-04-16 07:30:42.284015 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.55s
2026-04-16 07:30:42.284025 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.95s
2026-04-16 07:30:42.284036 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.90s
2026-04-16 07:30:42.284047 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.82s
2026-04-16 07:30:42.284057 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.69s
2026-04-16 07:30:42.284068 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.66s
2026-04-16 07:30:42.452867 | orchestrator | + osism apply -a upgrade mariadb
2026-04-16 07:30:43.789591 | orchestrator | 2026-04-16 07:30:43 | INFO  | Prepare task for execution of mariadb.
2026-04-16 07:30:43.851784 | orchestrator | 2026-04-16 07:30:43 | INFO  | Task 4eedae56-4232-4b69-a1a6-e90453532bd6 (mariadb) was prepared for execution.
2026-04-16 07:30:43.851858 | orchestrator | 2026-04-16 07:30:43 | INFO  | It takes a moment until task 4eedae56-4232-4b69-a1a6-e90453532bd6 (mariadb) has been started and output is visible here.
2026-04-16 07:31:09.829983 | orchestrator |
2026-04-16 07:31:09.830118 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 07:31:09.830129 | orchestrator |
2026-04-16 07:31:09.830134 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 07:31:09.830140 | orchestrator | Thursday 16 April 2026 07:30:48 +0000 (0:00:01.531) 0:00:01.531 ********
2026-04-16 07:31:09.830145 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:31:09.830150 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:31:09.830155 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:31:09.830160 | orchestrator |
2026-04-16 07:31:09.830164 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 07:31:09.830170 | orchestrator | Thursday 16 April 2026 07:30:50 +0000 (0:00:01.988) 0:00:03.520 ********
2026-04-16 07:31:09.830175 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-16 07:31:09.830180 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-16 07:31:09.830185 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-16 07:31:09.830190 | orchestrator |
2026-04-16 07:31:09.830194 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-16 07:31:09.830199 | orchestrator |
2026-04-16 07:31:09.830222 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-16 07:31:09.830227 | orchestrator | Thursday 16 April 2026 07:30:52 +0000 (0:00:01.665) 0:00:05.186 ********
2026-04-16 07:31:09.830232 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:31:09.830237 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:31:09.830242 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:31:09.830246 | orchestrator |
2026-04-16 07:31:09.830251 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 07:31:09.830255 | orchestrator | Thursday 16 April 2026 07:30:53 +0000 (0:00:01.443) 0:00:06.629 ********
2026-04-16 07:31:09.830261 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:31:09.830266 | orchestrator |
2026-04-16 07:31:09.830271 | orchestrator | TASK [mariadb : Remove mariadb-clustercheck] ***********************************
2026-04-16 07:31:09.830276 | orchestrator | Thursday 16 April 2026 07:30:55 +0000 (0:00:02.004) 0:00:08.634 ********
2026-04-16 07:31:09.830280 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:31:09.830285 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:31:09.830290 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:31:09.830294 | orchestrator |
2026-04-16 07:31:09.830299 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-04-16 07:31:09.830303 | orchestrator | Thursday 16 April 2026 07:30:58 +0000 (0:00:02.872) 0:00:11.507 ********
2026-04-16 07:31:09.830321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 07:31:09.830343 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 07:31:09.830357 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 07:31:09.830362 | orchestrator |
2026-04-16 07:31:09.830367 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-04-16 07:31:09.830371 | orchestrator | Thursday 16 April 2026 07:31:02 +0000 (0:00:03.985) 0:00:15.492 ********
2026-04-16 07:31:09.830376 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:31:09.830381 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:31:09.830386 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:31:09.830391 | orchestrator |
2026-04-16 07:31:09.830395 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-04-16 07:31:09.830400 | orchestrator | Thursday 16 April 2026 07:31:04 +0000 (0:00:01.663) 0:00:17.156 ********
2026-04-16 07:31:09.830407 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:31:09.830415 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:31:09.830427 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:31:09.830435 | orchestrator |
2026-04-16 07:31:09.830442 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-04-16 07:31:09.830450 | orchestrator | Thursday 16 April 2026 07:31:06 +0000 (0:00:02.261) 0:00:19.417 ********
2026-04-16 07:31:09.830470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-04-16 07:31:21.779201 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 07:31:21.779322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 07:31:21.779361 | orchestrator | 2026-04-16 07:31:21.779376 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-04-16 07:31:21.779389 | orchestrator | Thursday 16 April 2026 07:31:10 +0000 (0:00:04.376) 0:00:23.794 ******** 2026-04-16 
07:31:21.779401 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:21.779412 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:21.779423 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:31:21.779436 | orchestrator | 2026-04-16 07:31:21.779448 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-04-16 07:31:21.779476 | orchestrator | Thursday 16 April 2026 07:31:12 +0000 (0:00:01.984) 0:00:25.779 ******** 2026-04-16 07:31:21.779488 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:31:21.779499 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:31:21.779509 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:31:21.779520 | orchestrator | 2026-04-16 07:31:21.779531 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-04-16 07:31:21.779542 | orchestrator | Thursday 16 April 2026 07:31:17 +0000 (0:00:04.773) 0:00:30.552 ******** 2026-04-16 07:31:21.779554 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:31:21.779565 | orchestrator | 2026-04-16 07:31:21.779576 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-04-16 07:31:21.779587 | orchestrator | Thursday 16 April 2026 07:31:19 +0000 (0:00:01.845) 0:00:32.397 ******** 2026-04-16 07:31:21.779605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:21.779626 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:21.779646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:28.477378 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:28.477483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:28.477512 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:28.477519 | orchestrator | 2026-04-16 07:31:28.477527 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-04-16 07:31:28.477534 | orchestrator | Thursday 16 April 2026 07:31:22 +0000 (0:00:03.470) 0:00:35.868 ******** 2026-04-16 07:31:28.477542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:28.477549 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:28.477575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:28.477594 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:28.477606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:28.477616 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:28.477625 | orchestrator | 2026-04-16 07:31:28.477636 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-04-16 07:31:28.477645 | orchestrator | Thursday 16 April 2026 07:31:26 +0000 (0:00:03.413) 0:00:39.281 ******** 2026-04-16 07:31:28.477668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:33.559437 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:33.559561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:33.559582 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:33.559612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:33.560434 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:33.560455 | orchestrator | 2026-04-16 07:31:33.560469 | orchestrator | TASK [service-check-containers : mariadb | Check containers] ******************* 2026-04-16 07:31:33.560485 | orchestrator | Thursday 16 April 2026 07:31:30 +0000 (0:00:03.849) 0:00:43.131 ******** 2026-04-16 07:31:33.560521 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 07:31:33.560543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 07:31:33.560572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-04-16 07:31:48.678436 | orchestrator | 2026-04-16 07:31:48.678560 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-04-16 07:31:48.678584 | orchestrator | Thursday 16 April 2026 07:31:34 +0000 (0:00:04.402) 0:00:47.533 ******** 2026-04-16 07:31:48.678600 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:31:48.678614 | 
orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:31:48.678627 | orchestrator | } 2026-04-16 07:31:48.678639 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 07:31:48.678653 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:31:48.678665 | orchestrator | } 2026-04-16 07:31:48.678677 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 07:31:48.678691 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:31:48.678705 | orchestrator | } 2026-04-16 07:31:48.678719 | orchestrator | 2026-04-16 07:31:48.678734 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 07:31:48.678747 | orchestrator | Thursday 16 April 2026 07:31:36 +0000 (0:00:01.486) 0:00:49.020 ******** 2026-04-16 07:31:48.678784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:48.678865 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.678908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:48.678927 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.678951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:31:48.678979 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.678994 | orchestrator | 2026-04-16 07:31:48.679009 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-04-16 07:31:48.679024 | orchestrator | Thursday 16 April 2026 07:31:39 +0000 (0:00:03.775) 0:00:52.795 ******** 2026-04-16 07:31:48.679039 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679054 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679069 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679083 | orchestrator | 2026-04-16 07:31:48.679098 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-04-16 07:31:48.679113 | orchestrator | Thursday 16 April 2026 07:31:41 +0000 (0:00:01.570) 0:00:54.366 ******** 2026-04-16 07:31:48.679129 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679145 | orchestrator | 2026-04-16 07:31:48.679160 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-04-16 07:31:48.679174 | orchestrator | Thursday 16 April 2026 07:31:42 +0000 (0:00:01.089) 0:00:55.456 ******** 2026-04-16 07:31:48.679188 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679203 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679217 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679231 | 
orchestrator | 2026-04-16 07:31:48.679246 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-04-16 07:31:48.679261 | orchestrator | Thursday 16 April 2026 07:31:43 +0000 (0:00:01.360) 0:00:56.816 ******** 2026-04-16 07:31:48.679277 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679292 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679307 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679319 | orchestrator | 2026-04-16 07:31:48.679334 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-04-16 07:31:48.679348 | orchestrator | Thursday 16 April 2026 07:31:45 +0000 (0:00:01.416) 0:00:58.233 ******** 2026-04-16 07:31:48.679363 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679377 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679392 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679406 | orchestrator | 2026-04-16 07:31:48.679419 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-04-16 07:31:48.679434 | orchestrator | Thursday 16 April 2026 07:31:46 +0000 (0:00:01.528) 0:00:59.761 ******** 2026-04-16 07:31:48.679449 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679462 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679478 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679493 | orchestrator | 2026-04-16 07:31:48.679507 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-04-16 07:31:48.679521 | orchestrator | Thursday 16 April 2026 07:31:48 +0000 (0:00:01.424) 0:01:01.186 ******** 2026-04-16 07:31:48.679535 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:31:48.679550 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:31:48.679565 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:31:48.679579 | 
orchestrator | 2026-04-16 07:31:48.679615 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-04-16 07:32:05.969402 | orchestrator | Thursday 16 April 2026 07:31:49 +0000 (0:00:01.349) 0:01:02.536 ******** 2026-04-16 07:32:05.969518 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.969534 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.969546 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.969557 | orchestrator | 2026-04-16 07:32:05.969569 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-04-16 07:32:05.969581 | orchestrator | Thursday 16 April 2026 07:31:50 +0000 (0:00:01.317) 0:01:03.854 ******** 2026-04-16 07:32:05.969592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 07:32:05.969603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 07:32:05.969614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 07:32:05.969625 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.969637 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 07:32:05.969648 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 07:32:05.969679 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 07:32:05.969701 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.969713 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 07:32:05.969724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 07:32:05.969734 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 07:32:05.969745 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.969756 | orchestrator | 2026-04-16 07:32:05.969768 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to 
temp file] *** 2026-04-16 07:32:05.969779 | orchestrator | Thursday 16 April 2026 07:31:52 +0000 (0:00:01.607) 0:01:05.462 ******** 2026-04-16 07:32:05.969789 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.969800 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.969831 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.969843 | orchestrator | 2026-04-16 07:32:05.969854 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-04-16 07:32:05.969865 | orchestrator | Thursday 16 April 2026 07:31:53 +0000 (0:00:01.357) 0:01:06.819 ******** 2026-04-16 07:32:05.969876 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.969886 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.969897 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.969908 | orchestrator | 2026-04-16 07:32:05.969935 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-04-16 07:32:05.969946 | orchestrator | Thursday 16 April 2026 07:31:55 +0000 (0:00:01.386) 0:01:08.206 ******** 2026-04-16 07:32:05.969959 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.969973 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.969985 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.969997 | orchestrator | 2026-04-16 07:32:05.970010 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-04-16 07:32:05.970056 | orchestrator | Thursday 16 April 2026 07:31:56 +0000 (0:00:01.337) 0:01:09.544 ******** 2026-04-16 07:32:05.970069 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970082 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970093 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.970104 | orchestrator | 2026-04-16 07:32:05.970117 | orchestrator | TASK [mariadb : Starting first MariaDB container] 
****************************** 2026-04-16 07:32:05.970128 | orchestrator | Thursday 16 April 2026 07:31:57 +0000 (0:00:01.336) 0:01:10.881 ******** 2026-04-16 07:32:05.970139 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970158 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970169 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.970180 | orchestrator | 2026-04-16 07:32:05.970191 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-04-16 07:32:05.970230 | orchestrator | Thursday 16 April 2026 07:31:59 +0000 (0:00:01.374) 0:01:12.256 ******** 2026-04-16 07:32:05.970242 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970252 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970263 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.970274 | orchestrator | 2026-04-16 07:32:05.970284 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-04-16 07:32:05.970295 | orchestrator | Thursday 16 April 2026 07:32:00 +0000 (0:00:01.422) 0:01:13.679 ******** 2026-04-16 07:32:05.970306 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970317 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970327 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.970338 | orchestrator | 2026-04-16 07:32:05.970349 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-04-16 07:32:05.970360 | orchestrator | Thursday 16 April 2026 07:32:02 +0000 (0:00:01.613) 0:01:15.293 ******** 2026-04-16 07:32:05.970371 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970381 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970392 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:05.970403 | orchestrator | 2026-04-16 07:32:05.970414 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] 
**************************** 2026-04-16 07:32:05.970424 | orchestrator | Thursday 16 April 2026 07:32:03 +0000 (0:00:01.337) 0:01:16.630 ******** 2026-04-16 07:32:05.970465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}})  2026-04-16 07:32:05.970482 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:05.970499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:32:05.970520 
| orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:05.970542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:32:23.352653 | orchestrator | skipping: [testbed-node-2] 
2026-04-16 07:32:23.352788 | orchestrator | 2026-04-16 07:32:23.352840 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-04-16 07:32:23.352859 | orchestrator | Thursday 16 April 2026 07:32:07 +0000 (0:00:03.335) 0:01:19.965 ******** 2026-04-16 07:32:23.352876 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.352893 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.352908 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.352925 | orchestrator | 2026-04-16 07:32:23.352942 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-04-16 07:32:23.352959 | orchestrator | Thursday 16 April 2026 07:32:08 +0000 (0:00:01.361) 0:01:21.327 ******** 2026-04-16 07:32:23.353001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:32:23.353051 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.353094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:32:23.353113 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-04-16 07:32:23.353167 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353184 | orchestrator | 2026-04-16 07:32:23.353200 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-04-16 07:32:23.353215 | orchestrator | Thursday 16 April 2026 07:32:11 +0000 (0:00:03.401) 0:01:24.728 ******** 2026-04-16 07:32:23.353233 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.353250 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353266 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353282 | orchestrator | 2026-04-16 07:32:23.353297 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-04-16 07:32:23.353313 | orchestrator | Thursday 16 April 2026 07:32:13 +0000 (0:00:01.706) 0:01:26.434 ******** 2026-04-16 07:32:23.353328 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.353343 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353358 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353374 | orchestrator | 2026-04-16 07:32:23.353390 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-04-16 07:32:23.353406 | orchestrator | Thursday 16 April 2026 07:32:14 +0000 (0:00:01.329) 0:01:27.764 ******** 2026-04-16 07:32:23.353423 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
07:32:23.353439 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353455 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353470 | orchestrator | 2026-04-16 07:32:23.353486 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-04-16 07:32:23.353501 | orchestrator | Thursday 16 April 2026 07:32:16 +0000 (0:00:01.382) 0:01:29.147 ******** 2026-04-16 07:32:23.353514 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.353528 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353542 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353557 | orchestrator | 2026-04-16 07:32:23.353570 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-04-16 07:32:23.353586 | orchestrator | Thursday 16 April 2026 07:32:17 +0000 (0:00:01.715) 0:01:30.862 ******** 2026-04-16 07:32:23.353601 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:32:23.353615 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:32:23.353630 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:32:23.353645 | orchestrator | 2026-04-16 07:32:23.353660 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-04-16 07:32:23.353676 | orchestrator | Thursday 16 April 2026 07:32:19 +0000 (0:00:01.712) 0:01:32.575 ******** 2026-04-16 07:32:23.353693 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:32:23.353724 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:32:23.353741 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:32:23.353757 | orchestrator | 2026-04-16 07:32:23.353773 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-04-16 07:32:23.353789 | orchestrator | Thursday 16 April 2026 07:32:21 +0000 (0:00:02.161) 0:01:34.737 ******** 2026-04-16 07:32:23.353834 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:32:23.353852 | 
orchestrator | ok: [testbed-node-1]
2026-04-16 07:32:23.353868 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:32:23.353883 | orchestrator |
2026-04-16 07:32:23.353898 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-04-16 07:32:23.353914 | orchestrator | Thursday 16 April 2026 07:32:23 +0000 (0:00:01.382) 0:01:36.120 ********
2026-04-16 07:32:23.353947 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421106 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421244 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421262 | orchestrator |
2026-04-16 07:34:57.421275 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-04-16 07:34:57.421288 | orchestrator | Thursday 16 April 2026 07:32:24 +0000 (0:00:01.344) 0:01:37.464 ********
2026-04-16 07:34:57.421299 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421310 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421321 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421331 | orchestrator |
2026-04-16 07:34:57.421343 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-04-16 07:34:57.421355 | orchestrator | Thursday 16 April 2026 07:32:26 +0000 (0:00:01.847) 0:01:39.311 ********
2026-04-16 07:34:57.421365 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421376 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421387 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421398 | orchestrator |
2026-04-16 07:34:57.421408 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-04-16 07:34:57.421419 | orchestrator | Thursday 16 April 2026 07:32:27 +0000 (0:00:01.560) 0:01:40.872 ********
2026-04-16 07:34:57.421431 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.421442 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.421453 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.421463 | orchestrator |
2026-04-16 07:34:57.421474 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-04-16 07:34:57.421485 | orchestrator | Thursday 16 April 2026 07:32:29 +0000 (0:00:01.398) 0:01:42.271 ********
2026-04-16 07:34:57.421496 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421507 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421517 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421528 | orchestrator |
2026-04-16 07:34:57.421539 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-04-16 07:34:57.421549 | orchestrator | Thursday 16 April 2026 07:32:32 +0000 (0:00:03.450) 0:01:45.721 ********
2026-04-16 07:34:57.421560 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421571 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421581 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421592 | orchestrator |
2026-04-16 07:34:57.421605 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-04-16 07:34:57.421618 | orchestrator | Thursday 16 April 2026 07:32:34 +0000 (0:00:01.400) 0:01:47.121 ********
2026-04-16 07:34:57.421630 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.421642 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.421654 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.421667 | orchestrator |
2026-04-16 07:34:57.421679 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-04-16 07:34:57.421733 | orchestrator | Thursday 16 April 2026 07:32:35 +0000 (0:00:01.343) 0:01:48.465 ********
2026-04-16 07:34:57.421747 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.421760 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.421772 | orchestrator | skipping: [testbed-node-2]
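The "Check / Extract / Divide" WSREP tasks above amount to reading Galera's `wsrep_local_state_comment` status variable on each node and bucketing hosts by whether it reports `Synced`. A minimal sketch of that logic (helper names and the grouping keys are illustrative, not the actual kolla-ansible task code):

```python
# Sketch of the "extract + divide by WSREP sync status" steps above.
# Input mimics tab-separated output of:
#   mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"
# Helper names are hypothetical, not kolla-ansible's.

def extract_wsrep_status(show_status_output: str) -> str:
    """Pull the wsrep_local_state_comment value out of SHOW STATUS output."""
    for line in show_status_output.splitlines():
        name, _, value = line.partition("\t")
        if name == "wsrep_local_state_comment":
            return value
    return "Unknown"

def divide_by_sync_status(host_status: dict) -> dict:
    """Bucket hosts into synced / not-synced groups, as the play does."""
    groups = {"synced": [], "not_synced": []}
    for host, output in host_status.items():
        key = "synced" if extract_wsrep_status(output) == "Synced" else "not_synced"
        groups[key].append(host)
    return groups
```

With all three nodes reporting `Synced`, the "Fail when MariaDB services are not synced" task has an empty not-synced group and is skipped, which matches the log above.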
2026-04-16 07:34:57.421784 | orchestrator |
2026-04-16 07:34:57.421825 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 07:34:57.421838 | orchestrator | Thursday 16 April 2026 07:32:37 +0000 (0:00:01.666) 0:01:50.131 ********
2026-04-16 07:34:57.421851 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.421862 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.421873 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.421884 | orchestrator |
2026-04-16 07:34:57.421895 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 07:34:57.421906 | orchestrator | Thursday 16 April 2026 07:32:38 +0000 (0:00:01.450) 0:01:51.582 ********
2026-04-16 07:34:57.421916 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.421927 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.421942 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.421962 | orchestrator |
2026-04-16 07:34:57.421981 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-04-16 07:34:57.422000 | orchestrator | Thursday 16 April 2026 07:32:40 +0000 (0:00:01.756) 0:01:53.338 ********
2026-04-16 07:34:57.422099 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:34:57.422124 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:34:57.422181 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:34:57.422201 | orchestrator |
2026-04-16 07:34:57.422216 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-04-16 07:34:57.422227 | orchestrator | Thursday 16 April 2026 07:32:41 +0000 (0:00:01.364) 0:01:54.703 ********
2026-04-16 07:34:57.422238 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.422249 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.422259 | orchestrator | skipping: [testbed-node-2]
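The repeated "Restart mariadb services" plays that follow touch one node per play: each container restart is gated on that node's port liveness and WSREP sync before the next node is restarted. The ordering constraint can be sketched as (stub callables, purely illustrative of the serialization, not the real play):

```python
# Illustrative sketch of a serialized (one-node-at-a-time) rolling restart:
# each node must report port liveness / Synced again before the next
# restart begins, so the Galera cluster never loses more than one member.

def rolling_restart(nodes, restart, wait_until_synced):
    """Restart nodes sequentially; `restart` and `wait_until_synced` are
    injected callables so the ordering logic stays testable."""
    order = []
    for node in nodes:
        restart(node)
        wait_until_synced(node)  # blocks until WSREP reports Synced
        order.append(node)
    return order
```

The design point is that a restart of node N+1 can only start once node N's sync wait has returned, which is exactly the sequence visible in the log.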
2026-04-16 07:34:57.422270 | orchestrator |
2026-04-16 07:34:57.422280 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-16 07:34:57.422298 | orchestrator |
2026-04-16 07:34:57.422317 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-16 07:34:57.422335 | orchestrator | Thursday 16 April 2026 07:32:43 +0000 (0:00:02.001) 0:01:56.705 ********
2026-04-16 07:34:57.422353 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:34:57.422372 | orchestrator |
2026-04-16 07:34:57.422386 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-16 07:34:57.422401 | orchestrator | Thursday 16 April 2026 07:33:09 +0000 (0:00:25.882) 0:02:22.587 ********
2026-04-16 07:34:57.422418 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.422437 | orchestrator |
2026-04-16 07:34:57.422454 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-16 07:34:57.422472 | orchestrator | Thursday 16 April 2026 07:33:15 +0000 (0:00:05.635) 0:02:28.222 ********
2026-04-16 07:34:57.422490 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.422510 | orchestrator |
2026-04-16 07:34:57.422530 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-16 07:34:57.422548 | orchestrator |
2026-04-16 07:34:57.422567 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-16 07:34:57.422586 | orchestrator | Thursday 16 April 2026 07:33:18 +0000 (0:00:03.211) 0:02:31.434 ********
2026-04-16 07:34:57.422605 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:34:57.422623 | orchestrator |
2026-04-16 07:34:57.422642 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-16 07:34:57.422719 | orchestrator | Thursday 16 April 2026 07:33:42 +0000 (0:00:24.191) 0:02:55.625 ********
2026-04-16 07:34:57.422734 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.422745 | orchestrator |
2026-04-16 07:34:57.422756 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-16 07:34:57.422767 | orchestrator | Thursday 16 April 2026 07:33:49 +0000 (0:00:07.208) 0:03:02.834 ********
2026-04-16 07:34:57.422778 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.422789 | orchestrator |
2026-04-16 07:34:57.422799 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-16 07:34:57.422810 | orchestrator |
2026-04-16 07:34:57.422837 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-04-16 07:34:57.422923 | orchestrator | Thursday 16 April 2026 07:33:52 +0000 (0:00:02.929) 0:03:05.763 ********
2026-04-16 07:34:57.422947 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:34:57.422966 | orchestrator |
2026-04-16 07:34:57.422985 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-04-16 07:34:57.423004 | orchestrator | Thursday 16 April 2026 07:34:16 +0000 (0:00:23.743) 0:03:29.507 ********
2026-04-16 07:34:57.423029 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left).
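The "Wait for MariaDB service port liveness" task (including the benign FAILED - RETRYING line for testbed-node-2 above) is a retried TCP connect against the MariaDB port. The real task uses Ansible's `wait_for` module; the retry behavior is roughly equivalent to this sketch (parameter names illustrative):

```python
# Sketch of a retried TCP port-liveness check, approximating what the
# "Wait for MariaDB service port liveness" task does via Ansible wait_for.
import socket
import time

def wait_for_port(host: str, port: int, retries: int = 10, delay: float = 0.0) -> bool:
    """Retry a TCP connect until the port accepts, or retries run out."""
    for attempt in range(retries):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # something is accepting connections
        except OSError:
            if attempt < retries - 1:
                time.sleep(delay)  # back off before the next attempt
    return False
```

A single failed attempt followed by success, as in the log, simply consumes one of the retries; only exhausting all of them would fail the task.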
2026-04-16 07:34:57.423047 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.423059 | orchestrator |
2026-04-16 07:34:57.423070 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-04-16 07:34:57.423080 | orchestrator | Thursday 16 April 2026 07:34:24 +0000 (0:00:08.046) 0:03:37.553 ********
2026-04-16 07:34:57.423091 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.423102 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-16 07:34:57.423113 | orchestrator |
2026-04-16 07:34:57.423123 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-16 07:34:57.423134 | orchestrator | skipping: no hosts matched
2026-04-16 07:34:57.423145 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-16 07:34:57.423156 | orchestrator | mariadb_bootstrap_restart
2026-04-16 07:34:57.423167 | orchestrator |
2026-04-16 07:34:57.423177 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-16 07:34:57.423188 | orchestrator | skipping: no hosts matched
2026-04-16 07:34:57.423199 | orchestrator |
2026-04-16 07:34:57.423209 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-16 07:34:57.423220 | orchestrator |
2026-04-16 07:34:57.423230 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-16 07:34:57.423241 | orchestrator | Thursday 16 April 2026 07:34:28 +0000 (0:00:04.002) 0:03:41.556 ********
2026-04-16 07:34:57.423252 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:34:57.423263 | orchestrator |
2026-04-16 07:34:57.423273 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-04-16 07:34:57.423284 | orchestrator | Thursday 16 April 2026 07:34:30 +0000 (0:00:01.751) 0:03:43.307 ********
2026-04-16 07:34:57.423295 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423306 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423316 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.423327 | orchestrator |
2026-04-16 07:34:57.423338 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-04-16 07:34:57.423348 | orchestrator | Thursday 16 April 2026 07:34:33 +0000 (0:00:03.154) 0:03:46.462 ********
2026-04-16 07:34:57.423359 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423369 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423380 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:34:57.423391 | orchestrator |
2026-04-16 07:34:57.423401 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-04-16 07:34:57.423412 | orchestrator | Thursday 16 April 2026 07:34:36 +0000 (0:00:03.317) 0:03:49.780 ********
2026-04-16 07:34:57.423423 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423433 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423444 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.423455 | orchestrator |
2026-04-16 07:34:57.423465 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-04-16 07:34:57.423476 | orchestrator | Thursday 16 April 2026 07:34:40 +0000 (0:00:03.153) 0:03:52.934 ********
2026-04-16 07:34:57.423487 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423497 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423508 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:34:57.423518 | orchestrator |
2026-04-16 07:34:57.423529 | orchestrator | TASK [service-check : mariadb | Get container facts] ***************************
2026-04-16 07:34:57.423551 | orchestrator | Thursday 16 April 2026 07:34:43 +0000 (0:00:03.352) 0:03:56.286 ********
2026-04-16 07:34:57.423562 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.423572 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.423583 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.423594 | orchestrator |
2026-04-16 07:34:57.423604 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] ***
2026-04-16 07:34:57.423615 | orchestrator | Thursday 16 April 2026 07:34:49 +0000 (0:00:06.150) 0:04:02.437 ********
2026-04-16 07:34:57.423626 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.423637 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423647 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423658 | orchestrator |
2026-04-16 07:34:57.423668 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] **************
2026-04-16 07:34:57.423679 | orchestrator | Thursday 16 April 2026 07:34:52 +0000 (0:00:03.031) 0:04:05.468 ********
2026-04-16 07:34:57.423742 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:34:57.423756 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:34:57.423767 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:34:57.423779 | orchestrator |
2026-04-16 07:34:57.423790 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-04-16 07:34:57.423801 | orchestrator | Thursday 16 April 2026 07:34:53 +0000 (0:00:01.311) 0:04:06.779 ********
2026-04-16 07:34:57.423821 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:34:57.423839 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:34:57.423869 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:34:57.423888 | orchestrator |
2026-04-16 07:34:57.423905 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-16 07:34:57.423937 | orchestrator | Thursday 16 April 2026 07:34:57 +0000 (0:00:03.521) 0:04:10.301 ********
2026-04-16 07:35:17.633146 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:35:17.633284 | orchestrator |
2026-04-16 07:35:17.633325 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ******************************
2026-04-16 07:35:17.633336 | orchestrator | Thursday 16 April 2026 07:34:59 +0000 (0:00:01.668) 0:04:11.970 ********
2026-04-16 07:35:17.633355 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:35:17.633366 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:35:17.633376 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:35:17.633386 | orchestrator |
2026-04-16 07:35:17.633396 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:35:17.633407 | orchestrator | testbed-node-0 : ok=35  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-04-16 07:35:17.633436 | orchestrator | testbed-node-1 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-16 07:35:17.633446 | orchestrator | testbed-node-2 : ok=27  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-16 07:35:17.633456 | orchestrator |
2026-04-16 07:35:17.633465 | orchestrator |
2026-04-16 07:35:17.633475 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:35:17.633484 | orchestrator | Thursday 16 April 2026 07:35:17 +0000 (0:00:18.200) 0:04:30.170 ********
2026-04-16 07:35:17.633494 | orchestrator | ===============================================================================
2026-04-16 07:35:17.633503 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 73.82s
2026-04-16 07:35:17.633513 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 20.89s
2026-04-16 07:35:17.633523 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 18.20s
2026-04-16 07:35:17.633540 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.14s
2026-04-16 07:35:17.633556 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.15s
2026-04-16 07:35:17.633602 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.77s
2026-04-16 07:35:17.633619 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.41s
2026-04-16 07:35:17.633631 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.38s
2026-04-16 07:35:17.633641 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.99s
2026-04-16 07:35:17.633652 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.85s
2026-04-16 07:35:17.633663 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.78s
2026-04-16 07:35:17.633674 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.52s
2026-04-16 07:35:17.633710 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.47s
2026-04-16 07:35:17.633721 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.45s
2026-04-16 07:35:17.633734 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.41s
2026-04-16 07:35:17.633753 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.40s
2026-04-16 07:35:17.633769 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.35s
2026-04-16 07:35:17.633785 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.34s
2026-04-16 07:35:17.633800 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.32s
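The TASKS RECAP lines above follow a fixed `role : task ---- N.NNs` shape, so durations can be pulled out for comparison across nightly runs with a small parser (illustrative tooling, not part of the job itself):

```python
# Sketch: parse Ansible profile_tasks-style recap lines into
# (task name, seconds) pairs. Not part of the testbed job; just a
# convenience for diffing durations between runs.
import re

RECAP_LINE = re.compile(r"^(?P<task>.+?) -+ (?P<secs>\d+\.\d+)s$")

def parse_recap(lines):
    """Extract (task name, duration in seconds) from TASKS RECAP lines."""
    out = []
    for line in lines:
        m = RECAP_LINE.match(line.strip())
        if m:
            out.append((m.group("task"), float(m.group("secs"))))
    return out
```

Non-matching lines (the `===` separator, timestamps) are silently skipped, so the whole recap block can be fed in as-is.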
2026-04-16 07:35:17.633813 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 3.15s
2026-04-16 07:35:17.807624 | orchestrator | + osism apply -a upgrade rabbitmq
2026-04-16 07:35:19.113654 | orchestrator | 2026-04-16 07:35:19 | INFO  | Prepare task for execution of rabbitmq.
2026-04-16 07:35:19.177014 | orchestrator | 2026-04-16 07:35:19 | INFO  | Task 1558897b-5dbb-489e-a068-84cb03184357 (rabbitmq) was prepared for execution.
2026-04-16 07:35:19.177103 | orchestrator | 2026-04-16 07:35:19 | INFO  | It takes a moment until task 1558897b-5dbb-489e-a068-84cb03184357 (rabbitmq) has been started and output is visible here.
2026-04-16 07:36:02.418685 | orchestrator |
2026-04-16 07:36:02.418799 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 07:36:02.418815 | orchestrator |
2026-04-16 07:36:02.418827 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 07:36:02.418838 | orchestrator | Thursday 16 April 2026 07:35:24 +0000 (0:00:01.969) 0:00:01.969 ********
2026-04-16 07:36:02.418848 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:36:02.418859 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:36:02.418869 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:36:02.418879 | orchestrator |
2026-04-16 07:36:02.418889 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 07:36:02.418899 | orchestrator | Thursday 16 April 2026 07:35:26 +0000 (0:00:02.216) 0:00:04.185 ********
2026-04-16 07:36:02.418909 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-04-16 07:36:02.418920 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-04-16 07:36:02.418929 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-04-16 07:36:02.418939 | orchestrator |
2026-04-16 07:36:02.418949 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-04-16 07:36:02.418959 | orchestrator |
2026-04-16 07:36:02.418969 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-16 07:36:02.418979 | orchestrator | Thursday 16 April 2026 07:35:29 +0000 (0:00:02.569) 0:00:06.755 ********
2026-04-16 07:36:02.418989 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:36:02.419000 | orchestrator |
2026-04-16 07:36:02.419010 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-16 07:36:02.419020 | orchestrator | Thursday 16 April 2026 07:35:32 +0000 (0:00:02.994) 0:00:09.750 ********
2026-04-16 07:36:02.419048 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:36:02.419058 | orchestrator |
2026-04-16 07:36:02.419068 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-04-16 07:36:02.419078 | orchestrator | Thursday 16 April 2026 07:35:34 +0000 (0:00:02.484) 0:00:12.235 ********
2026-04-16 07:36:02.419088 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:36:02.419097 | orchestrator |
2026-04-16 07:36:02.419107 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-04-16 07:36:02.419124 | orchestrator | Thursday 16 April 2026 07:35:37 +0000 (0:00:02.991) 0:00:15.226 ********
2026-04-16 07:36:02.419134 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:36:02.419145 | orchestrator |
2026-04-16 07:36:02.419154 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-04-16 07:36:02.419164 | orchestrator | Thursday 16 April 2026 07:35:47 +0000 (0:00:09.774) 0:00:25.001 ********
2026-04-16 07:36:02.419174 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 07:36:02.419183 | orchestrator |  "changed": false,
2026-04-16 07:36:02.419193 | orchestrator |  "msg": "All assertions passed"
2026-04-16 07:36:02.419203 | orchestrator | }
2026-04-16 07:36:02.419213 | orchestrator |
2026-04-16 07:36:02.419223 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-04-16 07:36:02.419232 | orchestrator | Thursday 16 April 2026 07:35:48 +0000 (0:00:01.290) 0:00:26.291 ********
2026-04-16 07:36:02.419242 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 07:36:02.419252 | orchestrator |  "changed": false,
2026-04-16 07:36:02.419262 | orchestrator |  "msg": "All assertions passed"
2026-04-16 07:36:02.419272 | orchestrator | }
2026-04-16 07:36:02.419281 | orchestrator |
2026-04-16 07:36:02.419291 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-04-16 07:36:02.419301 | orchestrator | Thursday 16 April 2026 07:35:50 +0000 (0:00:01.660) 0:00:27.952 ********
2026-04-16 07:36:02.419311 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 07:36:02.419320 | orchestrator |
2026-04-16 07:36:02.419330 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-04-16 07:36:02.419340 | orchestrator | Thursday 16 April 2026 07:35:52 +0000 (0:00:02.176) 0:00:29.709 ********
2026-04-16 07:36:02.419349 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:36:02.419359 | orchestrator |
2026-04-16 07:36:02.419369 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-04-16 07:36:02.419378 | orchestrator | Thursday 16 April 2026 07:35:54 +0000 (0:00:03.017) 0:00:31.886 ********
2026-04-16 07:36:02.419388 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:36:02.419398 | orchestrator |
2026-04-16 07:36:02.419407 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-04-16 07:36:02.419417 |
orchestrator | Thursday 16 April 2026 07:35:57 +0000 (0:00:03.017) 0:00:34.904 ******** 2026-04-16 07:36:02.419426 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:36:02.419436 | orchestrator | 2026-04-16 07:36:02.419445 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-04-16 07:36:02.419455 | orchestrator | Thursday 16 April 2026 07:35:58 +0000 (0:00:01.663) 0:00:36.567 ******** 2026-04-16 07:36:02.419488 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:02.419511 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:02.419528 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:02.419540 | orchestrator | 2026-04-16 07:36:02.419550 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-04-16 07:36:02.419559 | orchestrator | 
Thursday 16 April 2026 07:36:01 +0000 (0:00:02.091) 0:00:38.659 ******** 2026-04-16 07:36:02.419570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:02.419589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:22.407835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:22.407947 | orchestrator | 2026-04-16 07:36:22.407961 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-04-16 07:36:22.407971 | orchestrator | Thursday 16 April 2026 07:36:03 +0000 (0:00:02.483) 0:00:41.143 ******** 2026-04-16 07:36:22.407979 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 07:36:22.407988 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 07:36:22.407996 | orchestrator 
| ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-04-16 07:36:22.408014 | orchestrator | 2026-04-16 07:36:22.408030 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-04-16 07:36:22.408038 | orchestrator | Thursday 16 April 2026 07:36:05 +0000 (0:00:02.278) 0:00:43.421 ******** 2026-04-16 07:36:22.408046 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 07:36:22.408054 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 07:36:22.408061 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-04-16 07:36:22.408069 | orchestrator | 2026-04-16 07:36:22.408076 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-04-16 07:36:22.408083 | orchestrator | Thursday 16 April 2026 07:36:08 +0000 (0:00:02.742) 0:00:46.164 ******** 2026-04-16 07:36:22.408091 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 07:36:22.408098 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 07:36:22.408105 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-04-16 07:36:22.408112 | orchestrator | 2026-04-16 07:36:22.408119 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-04-16 07:36:22.408126 | orchestrator | Thursday 16 April 2026 07:36:10 +0000 (0:00:02.411) 0:00:48.576 ******** 2026-04-16 07:36:22.408152 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 07:36:22.408160 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 
07:36:22.408177 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-04-16 07:36:22.408184 | orchestrator | 2026-04-16 07:36:22.408192 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-04-16 07:36:22.408199 | orchestrator | Thursday 16 April 2026 07:36:13 +0000 (0:00:02.516) 0:00:51.092 ******** 2026-04-16 07:36:22.408206 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 07:36:22.408214 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 07:36:22.408221 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-04-16 07:36:22.408228 | orchestrator | 2026-04-16 07:36:22.408235 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-04-16 07:36:22.408243 | orchestrator | Thursday 16 April 2026 07:36:15 +0000 (0:00:02.290) 0:00:53.383 ******** 2026-04-16 07:36:22.408250 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 07:36:22.408257 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 07:36:22.408264 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-04-16 07:36:22.408271 | orchestrator | 2026-04-16 07:36:22.408279 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-04-16 07:36:22.408286 | orchestrator | Thursday 16 April 2026 07:36:18 +0000 (0:00:02.335) 0:00:55.718 ******** 2026-04-16 07:36:22.408293 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:36:22.408301 | orchestrator | 2026-04-16 07:36:22.408323 | orchestrator | TASK 
[service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-04-16 07:36:22.408331 | orchestrator | Thursday 16 April 2026 07:36:19 +0000 (0:00:01.792) 0:00:57.511 ******** 2026-04-16 07:36:22.408345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:22.408354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:22.408371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:22.408381 | orchestrator | 2026-04-16 07:36:22.408389 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-04-16 07:36:22.408398 | orchestrator | Thursday 16 April 2026 07:36:22 +0000 (0:00:02.386) 0:00:59.898 ******** 2026-04-16 07:36:22.408414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:36:30.690912 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:36:30.691043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:36:30.691064 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:36:30.691102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:36:30.691115 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:36:30.691127 | orchestrator | 2026-04-16 07:36:30.691140 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-04-16 07:36:30.691152 | orchestrator | Thursday 16 April 2026 07:36:23 +0000 (0:00:01.373) 0:01:01.271 ******** 2026-04-16 07:36:30.691164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:36:30.691176 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:36:30.691214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-04-16 07:36:30.691228 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:36:30.691240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:36:30.691259 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:36:30.691270 | orchestrator | 2026-04-16 07:36:30.691282 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-16 07:36:30.691293 | orchestrator | Thursday 16 April 2026 07:36:25 +0000 (0:00:01.915) 0:01:03.186 ******** 2026-04-16 07:36:30.691304 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:36:30.691317 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:36:30.691328 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:36:30.691339 | orchestrator | 2026-04-16 07:36:30.691350 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-04-16 07:36:30.691361 | orchestrator | Thursday 16 April 2026 07:36:29 
+0000 (0:00:04.117) 0:01:07.304 ******** 2026-04-16 07:36:30.691372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:36:30.691410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:38:16.431299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-04-16 07:38:16.431444 | orchestrator | 2026-04-16 07:38:16.431463 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-04-16 07:38:16.431476 | orchestrator | Thursday 16 April 2026 07:36:32 +0000 (0:00:02.360) 0:01:09.665 ******** 2026-04-16 07:38:16.431488 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:38:16.431499 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:38:16.431510 | orchestrator | } 2026-04-16 07:38:16.431522 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 07:38:16.431532 | orchestrator |  "msg": 
"Notifying handlers" 2026-04-16 07:38:16.431543 | orchestrator | } 2026-04-16 07:38:16.431555 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 07:38:16.431566 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:38:16.431577 | orchestrator | } 2026-04-16 07:38:16.431691 | orchestrator | 2026-04-16 07:38:16.431704 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 07:38:16.431715 | orchestrator | Thursday 16 April 2026 07:36:33 +0000 (0:00:01.510) 0:01:11.176 ******** 2026-04-16 07:38:16.431728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:38:16.431741 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:38:16.431753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:38:16.431774 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:38:16.431815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-04-16 07:38:16.431831 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:38:16.431844 | orchestrator | 2026-04-16 07:38:16.431856 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-04-16 07:38:16.431869 | orchestrator | Thursday 16 April 2026 07:36:35 +0000 (0:00:01.938) 0:01:13.114 ******** 2026-04-16 07:38:16.431881 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:38:16.431894 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:38:16.431907 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:38:16.431919 | orchestrator | 2026-04-16 07:38:16.431931 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-16 07:38:16.431944 | orchestrator | 2026-04-16 07:38:16.431957 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-16 07:38:16.431969 | orchestrator | Thursday 16 April 2026 07:36:36 +0000 (0:00:01.431) 0:01:14.545 ******** 2026-04-16 07:38:16.431982 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:38:16.431995 | orchestrator | 2026-04-16 07:38:16.432008 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-16 07:38:16.432020 | orchestrator | Thursday 16 April 2026 07:36:38 +0000 (0:00:02.064) 0:01:16.610 ******** 2026-04-16 07:38:16.432033 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:38:16.432046 | orchestrator | 2026-04-16 07:38:16.432058 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-16 07:38:16.432071 | orchestrator | Thursday 16 April 2026 07:36:48 +0000 (0:00:09.139) 0:01:25.749 ******** 2026-04-16 07:38:16.432084 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:38:16.432096 | orchestrator | 2026-04-16 07:38:16.432109 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2026-04-16 07:38:16.432121 | orchestrator | Thursday 16 April 2026 07:36:57 +0000 (0:00:09.091) 0:01:34.841 ******** 2026-04-16 07:38:16.432132 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:38:16.432143 | orchestrator | 2026-04-16 07:38:16.432154 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-16 07:38:16.432165 | orchestrator | 2026-04-16 07:38:16.432176 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-16 07:38:16.432187 | orchestrator | Thursday 16 April 2026 07:37:06 +0000 (0:00:09.733) 0:01:44.575 ******** 2026-04-16 07:38:16.432197 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:38:16.432208 | orchestrator | 2026-04-16 07:38:16.432219 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-16 07:38:16.432230 | orchestrator | Thursday 16 April 2026 07:37:08 +0000 (0:00:01.742) 0:01:46.318 ******** 2026-04-16 07:38:16.432241 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:38:16.432252 | orchestrator | 2026-04-16 07:38:16.432263 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-16 07:38:16.432274 | orchestrator | Thursday 16 April 2026 07:37:17 +0000 (0:00:09.218) 0:01:55.536 ******** 2026-04-16 07:38:16.432291 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:38:16.432302 | orchestrator | 2026-04-16 07:38:16.432313 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-16 07:38:16.432324 | orchestrator | Thursday 16 April 2026 07:37:32 +0000 (0:00:14.367) 0:02:09.904 ******** 2026-04-16 07:38:16.432335 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:38:16.432346 | orchestrator | 2026-04-16 07:38:16.432357 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-04-16 
07:38:16.432368 | orchestrator | 2026-04-16 07:38:16.432379 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-04-16 07:38:16.432390 | orchestrator | Thursday 16 April 2026 07:37:41 +0000 (0:00:09.345) 0:02:19.250 ******** 2026-04-16 07:38:16.432401 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:38:16.432411 | orchestrator | 2026-04-16 07:38:16.432422 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-04-16 07:38:16.432433 | orchestrator | Thursday 16 April 2026 07:37:43 +0000 (0:00:01.741) 0:02:20.992 ******** 2026-04-16 07:38:16.432444 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:38:16.432455 | orchestrator | 2026-04-16 07:38:16.432466 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-04-16 07:38:16.432477 | orchestrator | Thursday 16 April 2026 07:37:52 +0000 (0:00:09.400) 0:02:30.393 ******** 2026-04-16 07:38:16.432488 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:38:16.432499 | orchestrator | 2026-04-16 07:38:16.432510 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-04-16 07:38:16.432521 | orchestrator | Thursday 16 April 2026 07:38:06 +0000 (0:00:14.060) 0:02:44.453 ******** 2026-04-16 07:38:16.432531 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:38:16.432542 | orchestrator | 2026-04-16 07:38:16.432553 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-04-16 07:38:16.432564 | orchestrator | 2026-04-16 07:38:16.432575 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-04-16 07:38:16.432615 | orchestrator | Thursday 16 April 2026 07:38:16 +0000 (0:00:09.587) 0:02:54.040 ******** 2026-04-16 07:38:22.775389 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 
07:38:22.775492 | orchestrator | 2026-04-16 07:38:22.775509 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-04-16 07:38:22.775522 | orchestrator | Thursday 16 April 2026 07:38:17 +0000 (0:00:01.495) 0:02:55.536 ******** 2026-04-16 07:38:22.775533 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:38:22.775563 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:38:22.775574 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:38:22.775697 | orchestrator | 2026-04-16 07:38:22.775711 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:38:22.775723 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-04-16 07:38:22.775736 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 07:38:22.775748 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-04-16 07:38:22.775759 | orchestrator | 2026-04-16 07:38:22.775770 | orchestrator | 2026-04-16 07:38:22.775781 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:38:22.775793 | orchestrator | Thursday 16 April 2026 07:38:22 +0000 (0:00:04.514) 0:03:00.050 ******** 2026-04-16 07:38:22.775804 | orchestrator | =============================================================================== 2026-04-16 07:38:22.775815 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.52s 2026-04-16 07:38:22.775826 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 28.67s 2026-04-16 07:38:22.775859 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 27.76s 2026-04-16 07:38:22.775870 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.77s 
2026-04-16 07:38:22.775888 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.55s 2026-04-16 07:38:22.775906 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.51s 2026-04-16 07:38:22.775924 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.12s 2026-04-16 07:38:22.775944 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.02s 2026-04-16 07:38:22.775964 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.99s 2026-04-16 07:38:22.775984 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.99s 2026-04-16 07:38:22.775999 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.74s 2026-04-16 07:38:22.776011 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.57s 2026-04-16 07:38:22.776023 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.52s 2026-04-16 07:38:22.776035 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.48s 2026-04-16 07:38:22.776047 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.48s 2026-04-16 07:38:22.776059 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.41s 2026-04-16 07:38:22.776071 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.39s 2026-04-16 07:38:22.776083 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.36s 2026-04-16 07:38:22.776095 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.33s 2026-04-16 07:38:22.776108 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.29s 2026-04-16 
07:38:22.949998 | orchestrator | + osism apply -a upgrade openvswitch 2026-04-16 07:38:24.258146 | orchestrator | 2026-04-16 07:38:24 | INFO  | Prepare task for execution of openvswitch. 2026-04-16 07:38:24.320456 | orchestrator | 2026-04-16 07:38:24 | INFO  | Task dcfa718c-4b79-4c80-b0d1-a34b33230065 (openvswitch) was prepared for execution. 2026-04-16 07:38:24.320552 | orchestrator | 2026-04-16 07:38:24 | INFO  | It takes a moment until task dcfa718c-4b79-4c80-b0d1-a34b33230065 (openvswitch) has been started and output is visible here. 2026-04-16 07:38:48.254333 | orchestrator | 2026-04-16 07:38:48.254434 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 07:38:48.254453 | orchestrator | 2026-04-16 07:38:48.254467 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 07:38:48.254496 | orchestrator | Thursday 16 April 2026 07:38:29 +0000 (0:00:01.524) 0:00:01.524 ******** 2026-04-16 07:38:48.254510 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:38:48.254524 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:38:48.254533 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:38:48.254540 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:38:48.254548 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:38:48.254557 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:38:48.254626 | orchestrator | 2026-04-16 07:38:48.254647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 07:38:48.254659 | orchestrator | Thursday 16 April 2026 07:38:31 +0000 (0:00:02.668) 0:00:04.193 ******** 2026-04-16 07:38:48.254671 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254683 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254695 | orchestrator | ok: [testbed-node-2] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254707 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254719 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254761 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-04-16 07:38:48.254774 | orchestrator | 2026-04-16 07:38:48.254802 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-04-16 07:38:48.254813 | orchestrator | 2026-04-16 07:38:48.254821 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-04-16 07:38:48.254828 | orchestrator | Thursday 16 April 2026 07:38:33 +0000 (0:00:02.254) 0:00:06.448 ******** 2026-04-16 07:38:48.254837 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 07:38:48.254846 | orchestrator | 2026-04-16 07:38:48.254853 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-16 07:38:48.254861 | orchestrator | Thursday 16 April 2026 07:38:38 +0000 (0:00:04.286) 0:00:10.734 ******** 2026-04-16 07:38:48.254868 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-16 07:38:48.254876 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-16 07:38:48.254884 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-04-16 07:38:48.254892 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-16 07:38:48.254900 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-16 07:38:48.254908 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-16 07:38:48.254917 | orchestrator | 2026-04-16 07:38:48.254925 | orchestrator | TASK [module-load : Persist 
modules via modules-load.d] ************************ 2026-04-16 07:38:48.254933 | orchestrator | Thursday 16 April 2026 07:38:40 +0000 (0:00:02.394) 0:00:13.128 ******** 2026-04-16 07:38:48.254942 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-04-16 07:38:48.254950 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-04-16 07:38:48.254959 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-04-16 07:38:48.254967 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-04-16 07:38:48.254976 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-04-16 07:38:48.254984 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-04-16 07:38:48.254992 | orchestrator | 2026-04-16 07:38:48.254999 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-16 07:38:48.255006 | orchestrator | Thursday 16 April 2026 07:38:43 +0000 (0:00:02.651) 0:00:15.779 ******** 2026-04-16 07:38:48.255013 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-04-16 07:38:48.255021 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:38:48.255029 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-04-16 07:38:48.255037 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:38:48.255044 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-04-16 07:38:48.255051 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:38:48.255058 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-04-16 07:38:48.255066 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:38:48.255076 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-04-16 07:38:48.255088 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:38:48.255107 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-04-16 07:38:48.255120 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:38:48.255131 | 
orchestrator | 2026-04-16 07:38:48.255143 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-04-16 07:38:48.255155 | orchestrator | Thursday 16 April 2026 07:38:45 +0000 (0:00:02.026) 0:00:17.806 ******** 2026-04-16 07:38:48.255166 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:38:48.255177 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:38:48.255188 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:38:48.255199 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:38:48.255210 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:38:48.255221 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:38:48.255234 | orchestrator | 2026-04-16 07:38:48.255256 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-04-16 07:38:48.255269 | orchestrator | Thursday 16 April 2026 07:38:47 +0000 (0:00:01.951) 0:00:19.757 ******** 2026-04-16 07:38:48.255309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255338 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255352 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255366 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255380 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255392 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:48.255420 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557478 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557640 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557657 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557664 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557690 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557698 | orchestrator | 2026-04-16 07:38:51.557707 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-04-16 07:38:51.557716 | orchestrator | Thursday 16 April 2026 07:38:49 +0000 (0:00:02.471) 0:00:22.228 ******** 2026-04-16 07:38:51.557741 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557755 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557770 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557783 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557790 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:51.557805 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.085986 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086146 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086159 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086189 | orchestrator | ok: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086201 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086211 | orchestrator | 2026-04-16 07:38:57.086223 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-04-16 07:38:57.086234 | orchestrator | Thursday 16 April 2026 07:38:53 +0000 (0:00:03.501) 0:00:25.730 ******** 2026-04-16 07:38:57.086244 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:38:57.086255 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:38:57.086265 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:38:57.086275 | orchestrator | skipping: 
[testbed-node-3] 2026-04-16 07:38:57.086284 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:38:57.086308 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:38:57.086319 | orchestrator | 2026-04-16 07:38:57.086329 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] *************** 2026-04-16 07:38:57.086356 | orchestrator | Thursday 16 April 2026 07:38:55 +0000 (0:00:02.298) 0:00:28.028 ******** 2026-04-16 07:38:57.086367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 
07:38:57.086396 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:38:57.086438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-04-16 07:39:01.345790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.345924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.345941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.345953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.345965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.346010 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-04-16 07:39:01.346091 | orchestrator | 2026-04-16 07:39:01.346105 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-04-16 07:39:01.346117 | orchestrator | Thursday 16 April 2026 07:38:58 +0000 (0:00:03.398) 0:00:31.427 ******** 2026-04-16 07:39:01.346138 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:39:01.346150 | orchestrator |  
"msg": "Notifying handlers"
2026-04-16 07:39:01.346162 | orchestrator | }
2026-04-16 07:39:01.346173 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:39:01.346184 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:39:01.346195 | orchestrator | }
2026-04-16 07:39:01.346205 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:39:01.346216 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:39:01.346227 | orchestrator | }
2026-04-16 07:39:01.346239 | orchestrator | changed: [testbed-node-3] => {
2026-04-16 07:39:01.346250 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:39:01.346261 | orchestrator | }
2026-04-16 07:39:01.346272 | orchestrator | changed: [testbed-node-4] => {
2026-04-16 07:39:01.346283 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:39:01.346294 | orchestrator | }
2026-04-16 07:39:01.346304 | orchestrator | changed: [testbed-node-5] => {
2026-04-16 07:39:01.346315 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:39:01.346328 | orchestrator | }
2026-04-16 07:39:01.346340 | orchestrator |
2026-04-16 07:39:01.346354 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:39:01.346367 | orchestrator | Thursday 16 April 2026 07:39:00 +0000 (0:00:01.822) 0:00:33.250 ********
2026-04-16 07:39:01.346381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-16 07:39:01.346401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-16 07:39:01.346423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-16 07:39:01.346452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-16 07:39:01.346484 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:39:01.346516 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:39:31.980880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-16 07:39:31.980993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-16 07:39:31.981010 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:39:31.981022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-16 07:39:31.981033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-16 07:39:31.981043 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:39:31.981069 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-04-16 07:39:31.981120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-04-16 07:39:31.981131 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:39:31.981141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-04-16 07:39:31.981152 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-04-16 07:39:31.981162 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:39:31.981172 | orchestrator |
2026-04-16 07:39:31.981183 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981194 | orchestrator | Thursday 16 April 2026 07:39:03 +0000 (0:00:02.709) 0:00:35.959 ********
2026-04-16 07:39:31.981204 | orchestrator |
2026-04-16 07:39:31.981215 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981224 | orchestrator | Thursday 16 April 2026 07:39:04 +0000 (0:00:00.656) 0:00:36.616 ********
2026-04-16 07:39:31.981234 | orchestrator |
2026-04-16 07:39:31.981244 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981254 | orchestrator | Thursday 16 April 2026 07:39:04 +0000 (0:00:00.516) 0:00:37.132 ********
2026-04-16 07:39:31.981263 | orchestrator |
2026-04-16 07:39:31.981273 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981283 | orchestrator | Thursday 16 April 2026 07:39:05 +0000 (0:00:00.531) 0:00:37.664 ********
2026-04-16 07:39:31.981293 | orchestrator |
2026-04-16 07:39:31.981302 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981312 | orchestrator | Thursday 16 April 2026 07:39:05 +0000 (0:00:00.528) 0:00:38.192 ********
2026-04-16 07:39:31.981322 | orchestrator |
2026-04-16 07:39:31.981331 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-04-16 07:39:31.981341 | orchestrator | Thursday 16 April 2026 07:39:06 +0000 (0:00:00.524) 0:00:38.716 ********
2026-04-16 07:39:31.981358 | orchestrator |
2026-04-16 07:39:31.981368 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-04-16 07:39:31.981378 | orchestrator | Thursday 16 April 2026 07:39:07 +0000 (0:00:00.985) 0:00:39.702 ********
2026-04-16 07:39:31.981388 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:39:31.981398 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:39:31.981408 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:39:31.981420 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:39:31.981431 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:39:31.981442 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:39:31.981453 | orchestrator |
2026-04-16 07:39:31.981464 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-04-16 07:39:31.981476 | orchestrator | Thursday 16 April 2026 07:39:18 +0000 (0:00:11.409) 0:00:51.112 ********
2026-04-16 07:39:31.981487 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:39:31.981500 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:39:31.981511 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:39:31.981523 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:39:31.981533 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:39:31.981544 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:39:31.981583 | orchestrator |
2026-04-16 07:39:31.981600 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-04-16 07:39:31.981611 | orchestrator | Thursday 16 April 2026 07:39:20 +0000 (0:00:02.255) 0:00:53.367 ********
2026-04-16 07:39:31.981623 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:39:31.981633 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:39:31.981644 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:39:31.981655 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:39:31.981665 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:39:31.981675 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:39:31.981684 | orchestrator |
2026-04-16 07:39:31.981694 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-04-16 07:39:31.981710 | orchestrator | Thursday 16 April 2026 07:39:31 +0000 (0:00:11.058) 0:01:04.426 ********
2026-04-16 07:39:47.832050 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-04-16 07:39:47.832155 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-04-16 07:39:47.832170 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-04-16 07:39:47.832181 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-04-16 07:39:47.832191 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-04-16 07:39:47.832202 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-04-16 07:39:47.832212 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-04-16 07:39:47.832222 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-04-16 07:39:47.832232 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-04-16 07:39:47.832242 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-04-16 07:39:47.832251 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-04-16 07:39:47.832261 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-04-16 07:39:47.832271 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832281 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832314 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832324 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832334 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832344 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-04-16 07:39:47.832354 | orchestrator |
2026-04-16 07:39:47.832365 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-04-16 07:39:47.832376 | orchestrator | Thursday 16 April 2026 07:39:40 +0000 (0:00:08.242) 0:01:12.668 ******** 2026-04-16
07:39:47.832386 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-04-16 07:39:47.832397 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:39:47.832407 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-04-16 07:39:47.832417 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:39:47.832426 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-04-16 07:39:47.832436 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:39:47.832445 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-04-16 07:39:47.832455 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-04-16 07:39:47.832465 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-04-16 07:39:47.832474 | orchestrator |
2026-04-16 07:39:47.832484 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-04-16 07:39:47.832494 | orchestrator | Thursday 16 April 2026 07:39:43 +0000 (0:00:03.201) 0:01:15.869 ********
2026-04-16 07:39:47.832504 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832513 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:39:47.832523 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832533 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:39:47.832542 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832587 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:39:47.832597 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832609 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832620 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-04-16 07:39:47.832631 | orchestrator |
2026-04-16 07:39:47.832641 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 07:39:47.832667 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 07:39:47.832681 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 07:39:47.832692 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 07:39:47.832703 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:39:47.832731 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:39:47.832743 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 07:39:47.832754 | orchestrator |
2026-04-16 07:39:47.832765 | orchestrator |
2026-04-16 07:39:47.832776 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 07:39:47.832794 | orchestrator | Thursday 16 April 2026 07:39:47 +0000 (0:00:04.076) 0:01:19.946 ********
2026-04-16 07:39:47.832805 | orchestrator | ===============================================================================
2026-04-16 07:39:47.832816 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.41s
2026-04-16 07:39:47.832827 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.06s
2026-04-16 07:39:47.832837 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.24s
2026-04-16 07:39:47.832848 | orchestrator | openvswitch : include_tasks --------------------------------------------- 4.29s
2026-04-16 07:39:47.832859 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.08s
2026-04-16 07:39:47.832870 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.74s
2026-04-16 07:39:47.832881 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.50s
2026-04-16 07:39:47.832891 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.40s
2026-04-16 07:39:47.832902 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.20s
2026-04-16 07:39:47.832913 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.71s
2026-04-16 07:39:47.832924 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.67s
2026-04-16 07:39:47.832936 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.65s
2026-04-16 07:39:47.832946 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.47s
2026-04-16 07:39:47.832957 | orchestrator | module-load : Load modules ---------------------------------------------- 2.39s
2026-04-16 07:39:47.832968 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.30s
2026-04-16 07:39:47.832979 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.26s
2026-04-16 07:39:47.832988 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.25s
2026-04-16 07:39:47.832998 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.03s
2026-04-16 07:39:47.833007 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.95s
2026-04-16 07:39:47.833017 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 1.82s
2026-04-16 07:39:47.991590 | orchestrator | + osism apply -a upgrade ovn
2026-04-16 07:39:49.232008 | orchestrator | 2026-04-16 07:39:49 | INFO  | Prepare task for execution of ovn.
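The PLAY RECAP above is the quickest health signal in this transcript: the openvswitch upgrade step only counts as clean because every testbed node reports failed=0 and unreachable=0. A minimal sketch of a recap checker for such lines (a hypothetical helper, not part of the OSISM or Zuul tooling):

```python
import re

# Match an Ansible PLAY RECAP line such as:
#   testbed-node-0 : ok=15 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, counter dict) for one recap line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = dict(
        (key, int(val))
        for key, val in (pair.split("=") for pair in m.group("counters").split())
    )
    return m.group("host"), counters

def run_succeeded(lines: list[str]) -> bool:
    """True when no host reports failures or unreachability."""
    return all(
        c["failed"] == 0 and c["unreachable"] == 0
        for _, c in (parse_recap(line) for line in lines)
    )

recap = [
    "testbed-node-0 : ok=15 changed=4 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0",
    "testbed-node-3 : ok=13 changed=4 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0",
]
host, counters = parse_recap(recap[0])
print(host, counters["ok"], run_succeeded(recap))  # testbed-node-0 15 True
```

In a periodic job like this one, a gate of this shape is what decides whether the next `osism apply` step (here, `osism apply -a upgrade ovn`) is worth attempting at all.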
2026-04-16 07:39:49.292643 | orchestrator | 2026-04-16 07:39:49 | INFO  | Task b13c7656-c42b-41cd-8ca1-fdf701b98bb5 (ovn) was prepared for execution.
2026-04-16 07:39:49.292722 | orchestrator | 2026-04-16 07:39:49 | INFO  | It takes a moment until task b13c7656-c42b-41cd-8ca1-fdf701b98bb5 (ovn) has been started and output is visible here.
2026-04-16 07:40:09.692653 | orchestrator |
2026-04-16 07:40:09.692753 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 07:40:09.692765 | orchestrator |
2026-04-16 07:40:09.692773 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 07:40:09.692780 | orchestrator | Thursday 16 April 2026 07:39:54 +0000 (0:00:02.062) 0:00:02.062 ********
2026-04-16 07:40:09.692788 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:40:09.692796 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:40:09.692803 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:40:09.692810 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:40:09.692817 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:40:09.692824 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:40:09.692831 | orchestrator |
2026-04-16 07:40:09.692839 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 07:40:09.692847 | orchestrator | Thursday 16 April 2026 07:39:57 +0000 (0:00:02.862) 0:00:04.925 ********
2026-04-16 07:40:09.692855 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-04-16 07:40:09.692863 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-04-16 07:40:09.692892 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-04-16 07:40:09.692901 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-04-16 07:40:09.692908 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-04-16 07:40:09.692915 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-04-16 07:40:09.692922 | orchestrator |
2026-04-16 07:40:09.692943 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-04-16 07:40:09.692951 | orchestrator |
2026-04-16 07:40:09.692959 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-04-16 07:40:09.692966 | orchestrator | Thursday 16 April 2026 07:40:00 +0000 (0:00:03.123) 0:00:08.048 ********
2026-04-16 07:40:09.692975 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:40:09.692984 | orchestrator |
2026-04-16 07:40:09.692992 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-04-16 07:40:09.693000 | orchestrator | Thursday 16 April 2026 07:40:03 +0000 (0:00:03.414) 0:00:11.462 ********
2026-04-16 07:40:09.693009 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:40:09.693019 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:40:09.693026 | orchestrator | ok: [testbed-node-2] =>
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693033 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693039 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693064 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693076 | orchestrator | 2026-04-16 07:40:09.693083 | orchestrator | TASK [ovn-controller : Copying over config.json files for 
services] ************ 2026-04-16 07:40:09.693089 | orchestrator | Thursday 16 April 2026 07:40:06 +0000 (0:00:02.431) 0:00:13.894 ******** 2026-04-16 07:40:09.693096 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693107 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693121 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693134 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693140 | orchestrator | 2026-04-16 07:40:09.693147 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-04-16 07:40:09.693154 | orchestrator | Thursday 16 April 2026 07:40:09 +0000 (0:00:02.777) 0:00:16.672 ******** 2026-04-16 07:40:09.693161 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693170 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:09.693189 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526306 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526462 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526483 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526509 | orchestrator | 2026-04-16 07:40:18.526523 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-04-16 07:40:18.526630 | orchestrator | Thursday 16 April 2026 07:40:11 +0000 (0:00:02.195) 0:00:18.867 ******** 2026-04-16 07:40:18.526646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526659 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526670 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526681 | orchestrator | ok: [testbed-node-3] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526715 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526753 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526773 | orchestrator | 2026-04-16 07:40:18.526793 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-04-16 07:40:18.526811 | orchestrator | Thursday 16 April 2026 07:40:14 +0000 (0:00:02.868) 0:00:21.736 ******** 2026-04-16 07:40:18.526842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526926 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526944 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:40:18.526982 | orchestrator | 2026-04-16 07:40:18.527001 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-04-16 07:40:18.527019 | orchestrator | Thursday 16 April 2026 07:40:16 +0000 (0:00:02.582) 0:00:24.318 ******** 2026-04-16 07:40:18.527037 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 07:40:18.527056 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:40:18.527073 | orchestrator | } 2026-04-16 07:40:18.527090 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 07:40:18.527107 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:40:18.527126 | orchestrator | } 2026-04-16 07:40:18.527142 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 07:40:18.527158 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:40:18.527183 | orchestrator | } 2026-04-16 07:40:18.527206 | orchestrator | changed: [testbed-node-3] => { 2026-04-16 07:40:18.527224 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:40:18.527242 | orchestrator | } 2026-04-16 07:40:18.527271 | orchestrator | changed: [testbed-node-4] => { 2026-04-16 07:40:18.527290 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 07:40:18.527308 | orchestrator | } 2026-04-16 07:40:18.527324 | orchestrator | changed: [testbed-node-5] => { 2026-04-16 07:40:18.527342 | orchestrator 
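The `service-check-containers` step above reports `changed` on every node because the running `ovn_controller` container no longer matches the desired spec (a new image tag, `25.3.1.20260328`), and it then notifies handlers to restart the container. A minimal sketch of that decision, assuming a simple field-by-field comparison (this is an illustration, not kolla-ansible's actual implementation):

```python
def needs_restart(running, desired):
    """Return True when the running container differs from the desired spec
    in any field that requires a recreate (image, volumes, dimensions)."""
    return any(running.get(k) != desired.get(k)
               for k in ("image", "volumes", "dimensions"))

desired = {
    "image": "registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328",
    "volumes": ["/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro",
                "kolla_logs:/var/log/kolla/"],
    "dimensions": {},
}
# Hypothetical running state: same spec but an older image tag.
running = dict(desired, image=desired["image"].replace("20260328", "20260101"))
print(needs_restart(running, desired))  # → True: image tag changed, restart needed
```

When the comparison comes back unchanged, the task reports `ok` instead and the restart handler is never notified, which is why only changed nodes trigger the later `Restart ovn-controller container` handler.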
|  "msg": "Notifying handlers" 2026-04-16 07:40:18.527360 | orchestrator | } 2026-04-16 07:40:18.527379 | orchestrator | 2026-04-16 07:40:18.527398 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 07:40:18.527416 | orchestrator | Thursday 16 April 2026 07:40:18 +0000 (0:00:01.698) 0:00:26.017 ******** 2026-04-16 07:40:18.527454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327089 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:40:42.327224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327246 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:40:42.327259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327271 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 07:40:42.327283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327295 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:40:42.327307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327341 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:40:42.327353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:40:42.327365 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:40:42.327376 | orchestrator | 2026-04-16 07:40:42.327388 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-04-16 07:40:42.327400 | orchestrator | Thursday 16 April 2026 07:40:20 +0000 (0:00:02.362) 0:00:28.379 ******** 
2026-04-16 07:40:42.327411 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:40:42.327423 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:40:42.327433 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:40:42.327444 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:40:42.327455 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:40:42.327465 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:40:42.327476 | orchestrator | 2026-04-16 07:40:42.327487 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-04-16 07:40:42.327498 | orchestrator | Thursday 16 April 2026 07:40:24 +0000 (0:00:03.716) 0:00:32.095 ******** 2026-04-16 07:40:42.327509 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-04-16 07:40:42.327520 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-04-16 07:40:42.327560 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-04-16 07:40:42.327571 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-04-16 07:40:42.327582 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-04-16 07:40:42.327593 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-04-16 07:40:42.327603 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327614 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327625 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327638 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327650 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327680 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-04-16 07:40:42.327694 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327709 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327727 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327740 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327753 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327774 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-04-16 07:40:42.327787 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327800 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327812 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327824 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327837 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327849 | orchestrator | ok: 
[testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-04-16 07:40:42.327861 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327874 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327886 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327898 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327911 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327924 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-04-16 07:40:42.327937 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.327947 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.327958 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.327969 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.327980 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.327991 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-04-16 07:40:42.328002 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 07:40:42.328013 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 07:40:42.328024 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 
'physnet1:br-ex', 'state': 'present'}) 2026-04-16 07:40:42.328035 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-16 07:40:42.328046 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-04-16 07:40:42.328056 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-04-16 07:40:42.328068 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-04-16 07:40:42.328080 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-04-16 07:40:42.328091 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-04-16 07:40:42.328102 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-04-16 07:40:42.328113 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-04-16 07:40:42.328138 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-04-16 07:43:34.222831 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 07:43:34.222947 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 07:43:34.222979 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 07:43:34.222994 
| orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 07:43:34.223006 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-04-16 07:43:34.223018 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-04-16 07:43:34.223033 | orchestrator | 2026-04-16 07:43:34.223054 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223079 | orchestrator | Thursday 16 April 2026 07:40:45 +0000 (0:00:21.074) 0:00:53.170 ******** 2026-04-16 07:43:34.223107 | orchestrator | 2026-04-16 07:43:34.223127 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223146 | orchestrator | Thursday 16 April 2026 07:40:46 +0000 (0:00:00.438) 0:00:53.609 ******** 2026-04-16 07:43:34.223165 | orchestrator | 2026-04-16 07:43:34.223185 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223204 | orchestrator | Thursday 16 April 2026 07:40:46 +0000 (0:00:00.582) 0:00:54.191 ******** 2026-04-16 07:43:34.223224 | orchestrator | 2026-04-16 07:43:34.223243 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223264 | orchestrator | Thursday 16 April 2026 07:40:47 +0000 (0:00:00.439) 0:00:54.630 ******** 2026-04-16 07:43:34.223285 | orchestrator | 2026-04-16 07:43:34.223299 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223310 | orchestrator | Thursday 16 April 2026 07:40:47 +0000 (0:00:00.434) 0:00:55.065 ******** 2026-04-16 07:43:34.223321 | orchestrator | 2026-04-16 07:43:34.223332 | orchestrator | TASK 
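The `Configure OVN in OVSDB` task above writes each item into the Open vSwitch `external_ids` map, and removes keys whose `state` is `absent` (as seen for `ovn-bridge-mappings` and `ovn-cms-options` on the compute-only nodes). A hedged sketch of the equivalent `ovs-vsctl` invocations, constructing the command strings only (key names and values are taken from the log; real shell use would need quoting for comma-separated values):

```python
def ovs_vsctl_cmd(item):
    """Build the ovs-vsctl command for one external_ids item, mirroring the
    present/absent semantics shown in the task output above."""
    name, value = item["name"], item["value"]
    if item.get("state", "present") == "present":
        # Set (or overwrite) one key in the external_ids map column.
        return f"ovs-vsctl set Open_vSwitch . external_ids:{name}={value}"
    # state == 'absent': drop the key from the map if it exists.
    return f"ovs-vsctl remove Open_vSwitch . external_ids {name}"

items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-bridge-mappings", "value": "physnet1:br-ex", "state": "absent"},
]
for cmd in map(ovs_vsctl_cmd, items):
    print(cmd)
```

The `changed` results for `ovn-remote` on all six nodes indicate that only that key's value actually differed from what was already stored in OVSDB; the other keys were already at their desired values.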
[ovn-controller : Flush handlers] ***************************************** 2026-04-16 07:43:34.223343 | orchestrator | Thursday 16 April 2026 07:40:47 +0000 (0:00:00.432) 0:00:55.497 ******** 2026-04-16 07:43:34.223354 | orchestrator | 2026-04-16 07:43:34.223367 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-04-16 07:43:34.223379 | orchestrator | Thursday 16 April 2026 07:40:48 +0000 (0:00:00.801) 0:00:56.299 ******** 2026-04-16 07:43:34.223391 | orchestrator | 2026-04-16 07:43:34.223404 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-04-16 07:43:34.223417 | orchestrator | changed: [testbed-node-3] 2026-04-16 07:43:34.223430 | orchestrator | changed: [testbed-node-4] 2026-04-16 07:43:34.223443 | orchestrator | changed: [testbed-node-5] 2026-04-16 07:43:34.223455 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:43:34.223503 | orchestrator | changed: [testbed-node-2] 2026-04-16 07:43:34.223517 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:43:34.223529 | orchestrator | 2026-04-16 07:43:34.223542 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-04-16 07:43:34.223554 | orchestrator | 2026-04-16 07:43:34.223567 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 07:43:34.223579 | orchestrator | Thursday 16 April 2026 07:43:00 +0000 (0:02:12.138) 0:03:08.438 ******** 2026-04-16 07:43:34.223592 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:43:34.223604 | orchestrator | 2026-04-16 07:43:34.223617 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 07:43:34.223656 | orchestrator | Thursday 16 April 2026 07:43:02 +0000 (0:00:01.857) 0:03:10.295 ******** 2026-04-16 07:43:34.223670 | 
orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 07:43:34.223682 | orchestrator | 2026-04-16 07:43:34.223695 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-04-16 07:43:34.223707 | orchestrator | Thursday 16 April 2026 07:43:04 +0000 (0:00:01.778) 0:03:12.074 ******** 2026-04-16 07:43:34.223720 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.223731 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.223742 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.223753 | orchestrator | 2026-04-16 07:43:34.223763 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-04-16 07:43:34.223774 | orchestrator | Thursday 16 April 2026 07:43:06 +0000 (0:00:01.814) 0:03:13.888 ******** 2026-04-16 07:43:34.223785 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.223796 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.223806 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.223817 | orchestrator | 2026-04-16 07:43:34.223828 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-04-16 07:43:34.223839 | orchestrator | Thursday 16 April 2026 07:43:07 +0000 (0:00:01.507) 0:03:15.396 ******** 2026-04-16 07:43:34.223850 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.223860 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.223871 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.223882 | orchestrator | 2026-04-16 07:43:34.223893 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-04-16 07:43:34.223904 | orchestrator | Thursday 16 April 2026 07:43:09 +0000 (0:00:01.370) 0:03:16.766 ******** 2026-04-16 07:43:34.223914 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.223925 | orchestrator | ok: [testbed-node-1] 2026-04-16 
07:43:34.223936 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.223946 | orchestrator | 2026-04-16 07:43:34.223957 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-04-16 07:43:34.223968 | orchestrator | Thursday 16 April 2026 07:43:10 +0000 (0:00:01.337) 0:03:18.104 ******** 2026-04-16 07:43:34.223979 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224010 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224022 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224033 | orchestrator | 2026-04-16 07:43:34.224043 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-04-16 07:43:34.224054 | orchestrator | Thursday 16 April 2026 07:43:11 +0000 (0:00:01.390) 0:03:19.495 ******** 2026-04-16 07:43:34.224065 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:43:34.224076 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:43:34.224087 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:43:34.224097 | orchestrator | 2026-04-16 07:43:34.224116 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-04-16 07:43:34.224127 | orchestrator | Thursday 16 April 2026 07:43:13 +0000 (0:00:01.467) 0:03:20.962 ******** 2026-04-16 07:43:34.224138 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224148 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224159 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224170 | orchestrator | 2026-04-16 07:43:34.224181 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-04-16 07:43:34.224192 | orchestrator | Thursday 16 April 2026 07:43:15 +0000 (0:00:01.769) 0:03:22.732 ******** 2026-04-16 07:43:34.224203 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224213 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224224 | orchestrator | ok: [testbed-node-2] 
2026-04-16 07:43:34.224235 | orchestrator | 2026-04-16 07:43:34.224246 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-04-16 07:43:34.224256 | orchestrator | Thursday 16 April 2026 07:43:16 +0000 (0:00:01.369) 0:03:24.101 ******** 2026-04-16 07:43:34.224267 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224287 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224298 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224309 | orchestrator | 2026-04-16 07:43:34.224320 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-04-16 07:43:34.224331 | orchestrator | Thursday 16 April 2026 07:43:18 +0000 (0:00:01.824) 0:03:25.925 ******** 2026-04-16 07:43:34.224342 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224353 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224363 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224374 | orchestrator | 2026-04-16 07:43:34.224385 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-04-16 07:43:34.224396 | orchestrator | Thursday 16 April 2026 07:43:19 +0000 (0:00:01.346) 0:03:27.272 ******** 2026-04-16 07:43:34.224413 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:43:34.224439 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:43:34.224499 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:43:34.224519 | orchestrator | 2026-04-16 07:43:34.224536 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-04-16 07:43:34.224553 | orchestrator | Thursday 16 April 2026 07:43:21 +0000 (0:00:01.314) 0:03:28.587 ******** 2026-04-16 07:43:34.224570 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:43:34.224588 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:43:34.224605 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:43:34.224624 | 
orchestrator | 2026-04-16 07:43:34.224658 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-04-16 07:43:34.224676 | orchestrator | Thursday 16 April 2026 07:43:22 +0000 (0:00:01.329) 0:03:29.917 ******** 2026-04-16 07:43:34.224692 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224709 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224726 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224744 | orchestrator | 2026-04-16 07:43:34.224762 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-04-16 07:43:34.224780 | orchestrator | Thursday 16 April 2026 07:43:24 +0000 (0:00:02.087) 0:03:32.004 ******** 2026-04-16 07:43:34.224800 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224819 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224837 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224856 | orchestrator | 2026-04-16 07:43:34.224867 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-04-16 07:43:34.224879 | orchestrator | Thursday 16 April 2026 07:43:25 +0000 (0:00:01.372) 0:03:33.377 ******** 2026-04-16 07:43:34.224890 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224900 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224911 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224922 | orchestrator | 2026-04-16 07:43:34.224933 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-04-16 07:43:34.224944 | orchestrator | Thursday 16 April 2026 07:43:27 +0000 (0:00:01.739) 0:03:35.116 ******** 2026-04-16 07:43:34.224954 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:43:34.224965 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:43:34.224976 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:43:34.224986 | orchestrator | 2026-04-16 07:43:34.224997 | orchestrator | TASK [ovn-db : 
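The `lookup_cluster` tasks above repeatedly "divide hosts" by some predicate: volume availability, service-port liveness, then leader/follower role, before failing if an existing cluster has no leader. In Ansible this pattern is typically implemented with `group_by`, placing each host into a dynamic group named after its status; a minimal sketch of that grouping logic (hypothetical data, assuming a leader/follower status per DB host):

```python
from collections import defaultdict

def divide_hosts(status_by_host):
    """Group hosts into buckets keyed by status, as the 'Divide hosts by
    their OVN NB/SB leader/follower role' tasks do with dynamic groups."""
    groups = defaultdict(list)
    for host, status in sorted(status_by_host.items()):
        groups[status].append(host)
    return dict(groups)

roles = {"testbed-node-0": "leader",
         "testbed-node-1": "follower",
         "testbed-node-2": "follower"}
print(divide_hosts(roles))
```

Once the hosts are bucketed this way, the follow-up checks (e.g. "Fail on existing OVN NB cluster with no leader") reduce to testing whether the relevant group is empty, which is why they are simply `skipping` here.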
Fail on existing OVN SB cluster with no leader] ***************** 2026-04-16 07:43:34.225008 | orchestrator | Thursday 16 April 2026 07:43:28 +0000 (0:00:01.346) 0:03:36.463 ******** 2026-04-16 07:43:34.225019 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:43:34.225030 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:43:34.225041 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:43:34.225051 | orchestrator | 2026-04-16 07:43:34.225062 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-04-16 07:43:34.225073 | orchestrator | Thursday 16 April 2026 07:43:30 +0000 (0:00:01.622) 0:03:38.085 ******** 2026-04-16 07:43:34.225084 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:43:34.225095 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:43:34.225105 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:43:34.225129 | orchestrator | 2026-04-16 07:43:34.225139 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-04-16 07:43:34.225150 | orchestrator | Thursday 16 April 2026 07:43:32 +0000 (0:00:02.003) 0:03:40.089 ******** 2026-04-16 07:43:34.225177 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371571 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371607 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:43:40.371670 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:43:40.371694 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 07:43:40.371702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 07:43:40.371711 | orchestrator | 
2026-04-16 07:43:40.371722 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-04-16 07:43:40.371731 | orchestrator | Thursday 16 April 2026 07:43:36 +0000 (0:00:03.873) 0:03:43.963 ********
2026-04-16 07:43:40.371740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:40.371748 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:40.371764 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:40.371773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:40.371791 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861790 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861883 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861902 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.861949 | orchestrator | 
2026-04-16 07:43:54.861958 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] *************************
2026-04-16 07:43:54.861966 | orchestrator | Thursday 16 April 2026 07:43:42 +0000 (0:00:06.000) 0:03:49.963 ********
2026-04-16 07:43:54.861984 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1)
2026-04-16 07:43:54.861991 | orchestrator | 
2026-04-16 07:43:54.861998 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] *****
2026-04-16 07:43:54.862005 | orchestrator | Thursday 16 April 2026 07:43:44 +0000 (0:00:01.810) 0:03:51.773 ********
2026-04-16 07:43:54.862012 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:43:54.862062 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:43:54.862080 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:43:54.862087 | orchestrator | 
2026-04-16 07:43:54.862094 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] **********
2026-04-16 07:43:54.862100 | orchestrator | Thursday 16 April 2026 07:43:45 +0000 (0:00:01.709) 0:03:53.483 ********
2026-04-16 07:43:54.862106 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:43:54.862112 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:43:54.862119 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:43:54.862132 | orchestrator | 
2026-04-16 07:43:54.862140 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] *******************
2026-04-16 07:43:54.862151 | orchestrator | Thursday 16 April 2026 07:43:48 +0000 (0:00:02.737) 0:03:56.221 ********
2026-04-16 07:43:54.862162 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:43:54.862178 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:43:54.862189 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:43:54.862198 | orchestrator | 
2026-04-16 07:43:54.862207 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ********************
2026-04-16 07:43:54.862217 | orchestrator | Thursday 16 April 2026 07:43:51 +0000 (0:00:02.550) 0:03:58.772 ********
2026-04-16 07:43:54.862228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:54.862316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602288 | orchestrator | 
2026-04-16 07:43:59.602305 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-16 07:43:59.602320 | orchestrator | Thursday 16 April 2026 07:43:56 +0000 (0:00:05.127) 0:04:03.900 ********
2026-04-16 07:43:59.602336 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:43:59.602353 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:43:59.602368 | orchestrator | }
2026-04-16 07:43:59.602382 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:43:59.602396 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:43:59.602409 | orchestrator | }
2026-04-16 07:43:59.602423 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:43:59.602436 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:43:59.602502 | orchestrator | }
2026-04-16 07:43:59.602523 | orchestrator | 
2026-04-16 07:43:59.602539 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 07:43:59.602555 | orchestrator | Thursday 16 April 2026 07:43:57 +0000 (0:00:01.338) 0:04:05.239 ********
2026-04-16 07:43:59.602588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:43:59.602798 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 07:45:45.993351 | orchestrator | 
2026-04-16 07:45:45.993561 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] *****
2026-04-16 07:45:45.993596 | orchestrator | Thursday 16 April 2026 07:44:00 +0000 (0:00:03.193) 0:04:08.432 ********
2026-04-16 07:45:45.993617 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-16 07:45:45.993669 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-16 07:45:45.993690 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-16 07:45:45.993710 | orchestrator | 
2026-04-16 07:45:45.993730 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] ***
2026-04-16 07:45:45.993750 | orchestrator | Thursday 16 April 2026 07:44:21 +0000 (0:00:20.395) 0:04:28.828 ********
2026-04-16 07:45:45.993767 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 07:45:45.993787 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:45:45.993806 | orchestrator | }
2026-04-16 07:45:45.993825 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 07:45:45.993844 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:45:45.993863 | orchestrator | }
2026-04-16 07:45:45.993882 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 07:45:45.993901 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 07:45:45.993919 | orchestrator | }
2026-04-16 07:45:45.993939 | orchestrator | 
2026-04-16 07:45:45.993960 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-16 07:45:45.993981 | orchestrator | Thursday 16 April 2026 07:44:22 +0000 (0:00:01.326) 0:04:30.154 ********
2026-04-16 07:45:45.994000 | orchestrator | 
2026-04-16 07:45:45.994090 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-16 07:45:45.994113 | orchestrator | Thursday 16 April 2026 07:44:23 +0000 (0:00:00.431) 0:04:30.585 ********
2026-04-16 07:45:45.994133 | orchestrator | 
2026-04-16 07:45:45.994148 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-04-16 07:45:45.994161 | orchestrator | Thursday 16 April 2026 07:44:23 +0000 (0:00:00.415) 0:04:31.001 ********
2026-04-16 07:45:45.994174 | orchestrator | 
2026-04-16 07:45:45.994187 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-04-16 07:45:45.994199 | orchestrator | Thursday 16 April 2026 07:44:24 +0000 (0:00:00.769) 0:04:31.770 ********
2026-04-16 07:45:45.994212 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:45:45.994224 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:45:45.994235 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:45:45.994245 | orchestrator | 
2026-04-16 07:45:45.994256 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-04-16 07:45:45.994267 | orchestrator | Thursday 16 April 2026 07:44:39 +0000 (0:00:15.340) 0:04:47.110 ********
2026-04-16 07:45:45.994277 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:45:45.994288 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:45:45.994298 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:45:45.994309 | orchestrator | 
2026-04-16 07:45:45.994319 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] *******************
2026-04-16 07:45:45.994330 | orchestrator | Thursday 16 April 2026 07:44:55 +0000 (0:00:15.870) 0:05:02.981 ********
2026-04-16 07:45:45.994341 | orchestrator | changed: [testbed-node-2] => (item=1)
2026-04-16 07:45:45.994351 | orchestrator | changed: [testbed-node-1] => (item=1)
2026-04-16 07:45:45.994392 | orchestrator | changed: [testbed-node-0] => (item=1)
2026-04-16 07:45:45.994433 | orchestrator | 
2026-04-16 07:45:45.994445 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-04-16 07:45:45.994456 | orchestrator | Thursday 16 April 2026 07:45:09 +0000 (0:00:14.355) 0:05:17.336 ********
2026-04-16 07:45:45.994467 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:45:45.994478 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:45:45.994488 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:45:45.994499 | orchestrator | 
2026-04-16 07:45:45.994510 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-04-16 07:45:45.994521 | orchestrator | Thursday 16 April 2026 07:45:26 +0000 (0:00:16.487) 0:05:33.824 ********
2026-04-16 07:45:45.994532 | orchestrator | Pausing for 5 seconds
2026-04-16 07:45:45.994543 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:45:45.994554 | orchestrator | 
2026-04-16 07:45:45.994564 | orchestrator | TASK [ovn-db 
: Get OVN_Northbound cluster leader] ****************************** 2026-04-16 07:45:45.994575 | orchestrator | Thursday 16 April 2026 07:45:32 +0000 (0:00:06.156) 0:05:39.981 ******** 2026-04-16 07:45:45.994586 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:45:45.994611 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:45:45.994622 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:45:45.994633 | orchestrator | 2026-04-16 07:45:45.994644 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-04-16 07:45:45.994655 | orchestrator | Thursday 16 April 2026 07:45:34 +0000 (0:00:02.016) 0:05:41.997 ******** 2026-04-16 07:45:45.994665 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:45:45.994676 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:45:45.994687 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:45:45.994697 | orchestrator | 2026-04-16 07:45:45.994708 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-04-16 07:45:45.994719 | orchestrator | Thursday 16 April 2026 07:45:36 +0000 (0:00:01.591) 0:05:43.589 ******** 2026-04-16 07:45:45.994730 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:45:45.994741 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:45:45.994751 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:45:45.994762 | orchestrator | 2026-04-16 07:45:45.994773 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-04-16 07:45:45.994784 | orchestrator | Thursday 16 April 2026 07:45:37 +0000 (0:00:01.820) 0:05:45.409 ******** 2026-04-16 07:45:45.994795 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:45:45.994805 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:45:45.994816 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:45:45.994826 | orchestrator | 2026-04-16 07:45:45.994837 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] 
********************************************* 2026-04-16 07:45:45.994848 | orchestrator | Thursday 16 April 2026 07:45:39 +0000 (0:00:01.621) 0:05:47.030 ******** 2026-04-16 07:45:45.994858 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:45:45.994869 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:45:45.994880 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:45:45.994891 | orchestrator | 2026-04-16 07:45:45.994901 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-04-16 07:45:45.994935 | orchestrator | Thursday 16 April 2026 07:45:41 +0000 (0:00:02.022) 0:05:49.053 ******** 2026-04-16 07:45:45.994947 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:45:45.994958 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:45:45.994968 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:45:45.994979 | orchestrator | 2026-04-16 07:45:45.994990 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-04-16 07:45:45.995001 | orchestrator | Thursday 16 April 2026 07:45:43 +0000 (0:00:01.790) 0:05:50.844 ******** 2026-04-16 07:45:45.995012 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-04-16 07:45:45.995022 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-04-16 07:45:45.995033 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-04-16 07:45:45.995044 | orchestrator | 2026-04-16 07:45:45.995055 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 07:45:45.995075 | orchestrator | testbed-node-0 : ok=50  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 07:45:45.995088 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-16 07:45:45.995099 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-16 07:45:45.995110 | orchestrator | testbed-node-3 : ok=12  
changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:45:45.995121 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:45:45.995132 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-04-16 07:45:45.995143 | orchestrator | 2026-04-16 07:45:45.995154 | orchestrator | 2026-04-16 07:45:45.995165 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 07:45:45.995175 | orchestrator | Thursday 16 April 2026 07:45:45 +0000 (0:00:02.452) 0:05:53.296 ******** 2026-04-16 07:45:45.995186 | orchestrator | =============================================================================== 2026-04-16 07:45:45.995197 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 132.14s 2026-04-16 07:45:45.995208 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.07s 2026-04-16 07:45:45.995219 | orchestrator | service-check-containers : ovn_db | Check containers with iteration ---- 20.40s 2026-04-16 07:45:45.995229 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 16.49s 2026-04-16 07:45:45.995240 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.87s 2026-04-16 07:45:45.995251 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.34s 2026-04-16 07:45:45.995262 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 14.36s 2026-04-16 07:45:45.995273 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.16s 2026-04-16 07:45:45.995284 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.00s 2026-04-16 07:45:45.995294 | orchestrator | service-check-containers : ovn_db | Check containers 
-------------------- 5.13s 2026-04-16 07:45:45.995305 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.87s 2026-04-16 07:45:45.995316 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.72s 2026-04-16 07:45:45.995327 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.42s 2026-04-16 07:45:45.995338 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.19s 2026-04-16 07:45:45.995353 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.13s 2026-04-16 07:45:45.995365 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.12s 2026-04-16 07:45:45.995375 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.87s 2026-04-16 07:45:45.995386 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.86s 2026-04-16 07:45:45.995397 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.78s 2026-04-16 07:45:45.995440 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.74s 2026-04-16 07:45:46.108474 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-04-16 07:45:46.108595 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-16 07:45:46.108623 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-04-16 07:45:46.113811 | orchestrator | + set -e 2026-04-16 07:45:46.113911 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 07:45:46.113967 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 07:45:46.113988 | orchestrator | ++ INTERACTIVE=false 2026-04-16 07:45:46.114005 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 07:45:46.114115 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 07:45:46.114136 | 
orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-04-16 07:45:47.212367 | orchestrator | 2026-04-16 07:45:47 | INFO  | Prepare task for execution of ceph-rolling_update. 2026-04-16 07:45:47.267511 | orchestrator | 2026-04-16 07:45:47 | INFO  | Task cb8d279d-d2ec-453b-802e-84ab530e8163 (ceph-rolling_update) was prepared for execution. 2026-04-16 07:45:47.267582 | orchestrator | 2026-04-16 07:45:47 | INFO  | It takes a moment until task cb8d279d-d2ec-453b-802e-84ab530e8163 (ceph-rolling_update) has been started and output is visible here. 2026-04-16 07:47:06.508644 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-04-16 07:47:06.508849 | orchestrator | 2.16.14 2026-04-16 07:47:06.508881 | orchestrator | 2026-04-16 07:47:06.508903 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-04-16 07:47:06.508923 | orchestrator | 2026-04-16 07:47:06.508942 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-04-16 07:47:06.508961 | orchestrator | Thursday 16 April 2026 07:45:54 +0000 (0:00:01.599) 0:00:01.599 ******** 2026-04-16 07:47:06.508980 | orchestrator | skipping: [localhost] 2026-04-16 07:47:06.509001 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-04-16 07:47:06.509021 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-04-16 07:47:06.509041 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-04-16 07:47:06.509062 | orchestrator | 2026-04-16 07:47:06.509081 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-04-16 07:47:06.509102 | orchestrator | 2026-04-16 07:47:06.509121 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-04-16 07:47:06.509142 | orchestrator | Thursday 16 April 2026 
07:45:56 +0000 (0:00:02.039) 0:00:03.639 ******** 2026-04-16 07:47:06.509156 | orchestrator | ok: [testbed-node-0] => { 2026-04-16 07:47:06.509171 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509184 | orchestrator | } 2026-04-16 07:47:06.509198 | orchestrator | ok: [testbed-node-1] => { 2026-04-16 07:47:06.509212 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509225 | orchestrator | } 2026-04-16 07:47:06.509237 | orchestrator | ok: [testbed-node-2] => { 2026-04-16 07:47:06.509250 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509263 | orchestrator | } 2026-04-16 07:47:06.509276 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 07:47:06.509288 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509303 | orchestrator | } 2026-04-16 07:47:06.509314 | orchestrator | ok: [testbed-node-4] => { 2026-04-16 07:47:06.509325 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509336 | orchestrator | } 2026-04-16 07:47:06.509348 | orchestrator | ok: [testbed-node-5] => { 2026-04-16 07:47:06.509359 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509396 | orchestrator | } 2026-04-16 07:47:06.509408 | orchestrator | ok: [testbed-manager] => { 2026-04-16 07:47:06.509419 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-04-16 07:47:06.509430 | orchestrator | } 2026-04-16 07:47:06.509440 | orchestrator | 2026-04-16 07:47:06.509450 | orchestrator | TASK [Gather facts] ************************************************************ 2026-04-16 07:47:06.509459 | orchestrator | Thursday 16 April 2026 07:46:01 +0000 (0:00:04.548) 0:00:08.188 ******** 2026-04-16 07:47:06.509469 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
07:47:06.509479 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:47:06.509521 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:47:06.509533 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:47:06.509542 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:47:06.509552 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:47:06.509562 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.509572 | orchestrator | 2026-04-16 07:47:06.509581 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-04-16 07:47:06.509591 | orchestrator | Thursday 16 April 2026 07:46:06 +0000 (0:00:05.282) 0:00:13.471 ******** 2026-04-16 07:47:06.509601 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 07:47:06.509611 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:47:06.509621 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 07:47:06.509630 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 07:47:06.509640 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 07:47:06.509668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 07:47:06.509678 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 07:47:06.509688 | orchestrator | 2026-04-16 07:47:06.509698 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-04-16 07:47:06.509707 | orchestrator | Thursday 16 April 2026 07:46:37 +0000 (0:00:31.229) 0:00:44.700 ******** 2026-04-16 07:47:06.509717 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.509727 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.509737 | orchestrator | ok: 
[testbed-node-2] 2026-04-16 07:47:06.509746 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.509756 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.509766 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.509775 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.509785 | orchestrator | 2026-04-16 07:47:06.509794 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 07:47:06.509804 | orchestrator | Thursday 16 April 2026 07:46:39 +0000 (0:00:01.975) 0:00:46.676 ******** 2026-04-16 07:47:06.509815 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-16 07:47:06.509827 | orchestrator | 2026-04-16 07:47:06.509837 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 07:47:06.509846 | orchestrator | Thursday 16 April 2026 07:46:42 +0000 (0:00:02.635) 0:00:49.311 ******** 2026-04-16 07:47:06.509856 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.509866 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.509875 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.509885 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.509894 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.509904 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.509914 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.509923 | orchestrator | 2026-04-16 07:47:06.509953 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 07:47:06.509964 | orchestrator | Thursday 16 April 2026 07:46:45 +0000 (0:00:02.456) 0:00:51.768 ******** 2026-04-16 07:47:06.509973 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.509983 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.509992 | orchestrator | ok: [testbed-node-2] 
2026-04-16 07:47:06.510002 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510011 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510090 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510100 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510110 | orchestrator | 2026-04-16 07:47:06.510120 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 07:47:06.510129 | orchestrator | Thursday 16 April 2026 07:46:46 +0000 (0:00:01.832) 0:00:53.600 ******** 2026-04-16 07:47:06.510148 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510158 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510168 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510178 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510187 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510197 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510206 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510216 | orchestrator | 2026-04-16 07:47:06.510226 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 07:47:06.510236 | orchestrator | Thursday 16 April 2026 07:46:49 +0000 (0:00:02.467) 0:00:56.068 ******** 2026-04-16 07:47:06.510245 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510255 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510264 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510274 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510283 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510293 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510303 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510312 | orchestrator | 2026-04-16 07:47:06.510322 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 07:47:06.510332 | orchestrator | Thursday 16 April 2026 07:46:51 +0000 
(0:00:01.907) 0:00:57.976 ******** 2026-04-16 07:47:06.510341 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510351 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510360 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510399 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510409 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510419 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510429 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510438 | orchestrator | 2026-04-16 07:47:06.510448 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 07:47:06.510458 | orchestrator | Thursday 16 April 2026 07:46:53 +0000 (0:00:02.030) 0:01:00.007 ******** 2026-04-16 07:47:06.510467 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510476 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510486 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510496 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510505 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510515 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510525 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510534 | orchestrator | 2026-04-16 07:47:06.510544 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 07:47:06.510554 | orchestrator | Thursday 16 April 2026 07:46:55 +0000 (0:00:01.969) 0:01:01.977 ******** 2026-04-16 07:47:06.510563 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:06.510573 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:47:06.510583 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:47:06.510592 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:47:06.510602 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:47:06.510611 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:47:06.510621 | orchestrator | 
skipping: [testbed-manager] 2026-04-16 07:47:06.510631 | orchestrator | 2026-04-16 07:47:06.510640 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 07:47:06.510650 | orchestrator | Thursday 16 April 2026 07:46:57 +0000 (0:00:02.019) 0:01:03.996 ******** 2026-04-16 07:47:06.510660 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510669 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510679 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510688 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510698 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:06.510708 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510717 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510727 | orchestrator | 2026-04-16 07:47:06.510736 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 07:47:06.510752 | orchestrator | Thursday 16 April 2026 07:46:59 +0000 (0:00:01.906) 0:01:05.902 ******** 2026-04-16 07:47:06.510770 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 07:47:06.510779 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 07:47:06.510789 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:47:06.510799 | orchestrator | 2026-04-16 07:47:06.510808 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 07:47:06.510818 | orchestrator | Thursday 16 April 2026 07:47:00 +0000 (0:00:01.615) 0:01:07.518 ******** 2026-04-16 07:47:06.510827 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:06.510837 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:06.510847 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:06.510856 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:06.510866 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 07:47:06.510875 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:06.510885 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:06.510894 | orchestrator | 2026-04-16 07:47:06.510904 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 07:47:06.510914 | orchestrator | Thursday 16 April 2026 07:47:02 +0000 (0:00:02.068) 0:01:09.587 ******** 2026-04-16 07:47:06.510923 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 07:47:06.510933 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 07:47:06.510943 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:47:06.510952 | orchestrator | 2026-04-16 07:47:06.510962 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 07:47:06.510972 | orchestrator | Thursday 16 April 2026 07:47:06 +0000 (0:00:03.525) 0:01:13.113 ******** 2026-04-16 07:47:06.510989 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 07:47:28.482417 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 07:47:28.482542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 07:47:28.482558 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:28.482570 | orchestrator | 2026-04-16 07:47:28.482583 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 07:47:28.482595 | orchestrator | Thursday 16 April 2026 07:47:07 +0000 (0:00:01.363) 0:01:14.477 ******** 2026-04-16 07:47:28.482608 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 
07:47:28.482622 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 07:47:28.482633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 07:47:28.482644 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:28.482655 | orchestrator | 2026-04-16 07:47:28.482666 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 07:47:28.482678 | orchestrator | Thursday 16 April 2026 07:47:09 +0000 (0:00:01.916) 0:01:16.393 ******** 2026-04-16 07:47:28.482691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:28.482705 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:28.482738 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:28.482750 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:28.482761 | orchestrator | 2026-04-16 07:47:28.482772 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 07:47:28.482783 | orchestrator | Thursday 16 April 2026 07:47:10 +0000 (0:00:01.212) 0:01:17.605 ******** 2026-04-16 07:47:28.482810 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7ecc09e53bd0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 07:47:03.485243', 'end': '2026-04-16 07:47:03.539004', 'delta': '0:00:00.053761', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ecc09e53bd0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 07:47:28.482847 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'deb83ba22d33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 07:47:04.335490', 'end': '2026-04-16 07:47:04.373754', 'delta': '0:00:00.038264', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['deb83ba22d33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 07:47:28.482859 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8eb997055eb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 07:47:04.889713', 'end': '2026-04-16 07:47:04.946834', 'delta': '0:00:00.057121', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8eb997055eb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 07:47:28.482871 | orchestrator | 2026-04-16 07:47:28.482882 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 07:47:28.482894 | orchestrator | Thursday 16 April 2026 07:47:12 +0000 (0:00:01.191) 0:01:18.797 ******** 2026-04-16 07:47:28.482905 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:47:28.482917 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:47:28.482928 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:47:28.482939 | orchestrator | ok: [testbed-node-3] 2026-04-16 07:47:28.482949 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:47:28.482968 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:47:28.482979 | orchestrator | ok: [testbed-manager] 2026-04-16 07:47:28.482989 | orchestrator | 2026-04-16 07:47:28.483001 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 07:47:28.483012 | orchestrator | Thursday 16 April 2026 07:47:14 +0000 
(0:00:02.156) 0:01:20.953 ********
2026-04-16 07:47:28.483023 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:28.483034 | orchestrator |
2026-04-16 07:47:28.483045 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 07:47:28.483056 | orchestrator | Thursday 16 April 2026 07:47:15 +0000 (0:00:01.220) 0:01:22.174 ********
2026-04-16 07:47:28.483067 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:28.483078 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:47:28.483089 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:47:28.483099 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:47:28.483110 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:47:28.483120 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:47:28.483131 | orchestrator | ok: [testbed-manager]
2026-04-16 07:47:28.483142 | orchestrator |
2026-04-16 07:47:28.483153 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 07:47:28.483164 | orchestrator | Thursday 16 April 2026 07:47:17 +0000 (0:00:02.063) 0:01:24.237 ********
2026-04-16 07:47:28.483174 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:28.483185 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483197 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483207 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483218 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483229 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483240 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:47:28.483250 | orchestrator |
2026-04-16 07:47:28.483261 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:47:28.483272 | orchestrator | Thursday 16 April 2026 07:47:20 +0000 (0:00:03.498) 0:01:27.735 ********
2026-04-16 07:47:28.483283 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:28.483294 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:47:28.483305 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:47:28.483315 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:47:28.483331 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:47:28.483342 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:47:28.483353 | orchestrator | ok: [testbed-manager]
2026-04-16 07:47:28.483385 | orchestrator |
2026-04-16 07:47:28.483397 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 07:47:28.483407 | orchestrator | Thursday 16 April 2026 07:47:22 +0000 (0:00:02.011) 0:01:29.747 ********
2026-04-16 07:47:28.483419 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:28.483429 | orchestrator |
2026-04-16 07:47:28.483440 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 07:47:28.483451 | orchestrator | Thursday 16 April 2026 07:47:24 +0000 (0:00:01.094) 0:01:30.841 ********
2026-04-16 07:47:28.483462 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:28.483473 | orchestrator |
2026-04-16 07:47:28.483484 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:47:28.483494 | orchestrator | Thursday 16 April 2026 07:47:25 +0000 (0:00:01.204) 0:01:32.045 ********
2026-04-16 07:47:28.483505 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:28.483516 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:28.483527 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:28.483538 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:28.483549 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:28.483560 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:28.483570 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:28.483581 | orchestrator |
2026-04-16 07:47:28.483592 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 07:47:28.483610 | orchestrator | Thursday 16 April 2026 07:47:27 +0000 (0:00:02.360) 0:01:34.406 ********
2026-04-16 07:47:28.483620 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:28.483631 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:28.483642 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:28.483653 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:28.483664 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:28.483674 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:28.483692 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960411 | orchestrator |
2026-04-16 07:47:39.960509 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 07:47:39.960521 | orchestrator | Thursday 16 April 2026 07:47:29 +0000 (0:00:01.952) 0:01:36.359 ********
2026-04-16 07:47:39.960529 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:39.960536 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:39.960543 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:39.960550 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:39.960558 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:39.960569 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:39.960580 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960591 | orchestrator |
2026-04-16 07:47:39.960603 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 07:47:39.960614 | orchestrator | Thursday 16 April 2026 07:47:31 +0000 (0:00:02.048) 0:01:38.407 ********
2026-04-16 07:47:39.960626 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:39.960637 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:39.960648 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:39.960659 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:39.960671 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:39.960678 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:39.960685 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960692 | orchestrator |
2026-04-16 07:47:39.960699 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 07:47:39.960706 | orchestrator | Thursday 16 April 2026 07:47:33 +0000 (0:00:02.001) 0:01:40.409 ********
2026-04-16 07:47:39.960713 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:39.960720 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:39.960727 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:39.960734 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:39.960740 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:39.960747 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:39.960754 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960760 | orchestrator |
2026-04-16 07:47:39.960767 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 07:47:39.960774 | orchestrator | Thursday 16 April 2026 07:47:35 +0000 (0:00:02.163) 0:01:42.573 ********
2026-04-16 07:47:39.960781 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:39.960787 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:39.960794 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:39.960800 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:39.960807 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:39.960813 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:39.960820 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960827 | orchestrator |
2026-04-16 07:47:39.960833 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-16 07:47:39.960841 | orchestrator | Thursday 16 April 2026 07:47:37 +0000 (0:00:01.877) 0:01:44.450 ********
2026-04-16 07:47:39.960847 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:39.960854 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:39.960861 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:39.960867 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:39.960874 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:39.960904 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:39.960911 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:39.960918 | orchestrator |
2026-04-16 07:47:39.960924 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-16 07:47:39.960931 | orchestrator | Thursday 16 April 2026 07:47:39 +0000 (0:00:02.087) 0:01:46.537 ********
2026-04-16 07:47:39.960941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:47:39.960964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:47:39.960972 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:39.960998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:39.961008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:39.961017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:39.961025 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:39.961040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:39.961055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:39.961068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139652 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.139722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139750 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139774 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:40.139813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.139836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.139900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.139920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 
'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.453860 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:47:40.453869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453904 | orchestrator | 
skipping: [testbed-node-2] 2026-04-16 07:47:40.453912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}})  2026-04-16 07:47:40.453926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.453939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}})  2026-04-16 07:47:40.453960 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.453981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}})  2026-04-16 07:47:40.453994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.596810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.596929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}})  2026-04-16 07:47:40.596948 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.596988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.597002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 
'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.597026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597066 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 
'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.597152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 
'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}})  2026-04-16 07:47:40.597166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}})  2026-04-16 07:47:40.597193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}})  2026-04-16 07:47:40.701601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}})  2026-04-16 07:47:40.701730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.701782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.701806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.701852 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.701901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': 
['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.701916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.701927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.701947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 
'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}})  2026-04-16 07:47:40.701970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.866583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866622 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}})  2026-04-16 07:47:40.866630 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866654 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:47:40.866662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866669 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:47:40.866675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 
'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.866704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866728 | orchestrator | skipping: [testbed-manager] => 
(item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}})  2026-04-16 07:47:40.866747 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.866759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 
'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}})  2026-04-16 07:47:40.913714 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913816 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-33-09-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:47:40.913826 | 
orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': 
{'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.913876 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913891 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 
07:47:40.913902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:40.913910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b594b91e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 
'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 07:47:40.913927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:42.316972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 07:47:42.317086 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:47:42.317113 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:47:42.317136 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:47:42.317152 | orchestrator | 2026-04-16 07:47:42.317169 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 07:47:42.317189 | orchestrator | Thursday 16 April 2026 07:47:42 +0000 (0:00:02.373) 0:01:48.910 ******** 2026-04-16 07:47:42.317242 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317294 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317315 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317337 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317449 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-16 07:47:42.317466 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.317534 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425476 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425589 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425623 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425635 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425647 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': 
'506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425659 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425687 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425704 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425724 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 
'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425738 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.425756 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736644 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:47:42.736783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736802 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736813 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736825 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736836 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736847 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736881 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736904 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736917 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736928 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:47:42.736938 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': 
[], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.736969 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.894758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}}, 'ansible_loop_var': 'item'})  2026-04-16 07:47:42.894837 | orchestrator | skipping: [testbed-node-2] 
2026-04-16 07:47:42.894847 | orchestrator | skipping: [testbed-node-3] => all block-device items (sda, sdb, sdc, sdd, sr0, dm-0..dm-3, loop0..loop7); condition 'osd_auto_discovery | default(False) | bool' evaluated False for each item
2026-04-16 07:47:43.100274 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:42.894954 | orchestrator | skipping: [testbed-node-4] => all block-device items (sda, sdb, sdc, sdd, sr0, dm-0..dm-3, loop0..loop7); condition 'osd_auto_discovery | default(False) | bool' evaluated False for each item
2026-04-16 07:47:43.272547 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:43.272437 | orchestrator | skipping: [testbed-node-5] => block-device items (loop0..loop2, loop4, loop6, sdb, sdc, sdd, sr0, dm-0..dm-2); condition 'osd_auto_discovery | default(False) | bool' evaluated False for each item (output truncated mid-item)
2026-04-16 07:47:43.272610 | orchestrator | skipping: [testbed-manager] => device items (loop1, loop2, loop4, loop6, sr0); condition 'inventory_hostname in groups.get(osd_group_name, [])' evaluated False for each item
Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:43.357102 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:43.357124 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:43.357145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:43.357187 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059530 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b594b91e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part16', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part14', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part15', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part1', 'scsi-SQEMU_QEMU_HARDDISK_b594b91e-33b3-4c29-b9e6-3b2f15c3c19e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059740 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059777 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059796 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059809 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:57.059827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh',
'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:47:57.059844 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:57.059868 | orchestrator |
2026-04-16 07:47:57.059886 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 07:47:57.059901 | orchestrator | Thursday 16 April 2026 07:47:44 +0000 (0:00:02.413) 0:01:51.324 ********
2026-04-16 07:47:57.059914 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:57.059930 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:47:57.059943 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:47:57.059956 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:47:57.059971 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:47:57.059986 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:47:57.060011 | orchestrator | ok: [testbed-manager]
2026-04-16 07:47:57.060025 | orchestrator |
2026-04-16 07:47:57.060040 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 07:47:57.060055 | orchestrator | Thursday 16 April 2026 07:47:47 +0000 (0:00:02.585) 0:01:53.909 ********
2026-04-16 07:47:57.060070 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:57.060084 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:47:57.060099 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:47:57.060114 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:47:57.060129 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:47:57.060143 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:47:57.060159 | orchestrator | ok: [testbed-manager]
2026-04-16 07:47:57.060174 | orchestrator |
2026-04-16 07:47:57.060190 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 07:47:57.060205 | orchestrator | Thursday 16 April 2026 07:47:49 +0000 (0:00:01.965) 0:01:55.875 ********
2026-04-16 07:47:57.060221 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:47:57.060236 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:47:57.060252 | orchestrator | ok: [testbed-node-2]
2026-04-16 07:47:57.060267 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:47:57.060282 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:57.060297 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:47:57.060311 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:47:57.060325 | orchestrator |
2026-04-16 07:47:57.060340 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 07:47:57.060379 | orchestrator | Thursday 16 April 2026 07:47:51 +0000 (0:00:02.595) 0:01:58.471 ********
2026-04-16 07:47:57.060394 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:57.060409 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:57.060423 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:57.060438 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:57.060453 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:57.060468 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:57.060483 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:57.060498 | orchestrator |
2026-04-16 07:47:57.060514 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 07:47:57.060528 | orchestrator | Thursday 16 April 2026 07:47:53 +0000 (0:00:01.860) 0:02:00.332 ********
2026-04-16 07:47:57.060543 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:57.060552 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:57.060561 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:57.060569 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:57.060578 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:57.060586 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:57.060595 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-04-16 07:47:57.060603 | orchestrator |
2026-04-16 07:47:57.060612 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 07:47:57.060621 | orchestrator | Thursday 16 April 2026 07:47:56 +0000 (0:00:02.677) 0:02:03.009 ********
2026-04-16 07:47:57.060629 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:47:57.060638 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:47:57.060647 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:47:57.060655 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:47:57.060664 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:47:57.060672 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:47:57.060681 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:47:57.060690 | orchestrator |
2026-04-16 07:47:57.060698 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 07:48:27.874169 | orchestrator | Thursday 16 April 2026 07:47:58 +0000 (0:00:01.842) 0:02:04.852 ********
2026-04-16 07:48:27.874291 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:48:27.874305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 07:48:27.874315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:48:27.874386 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 07:48:27.874397 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 07:48:27.874406 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:48:27.874415 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 07:48:27.874436 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 07:48:27.874445 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 07:48:27.874453 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 07:48:27.874462 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 07:48:27.874471 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 07:48:27.874480 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 07:48:27.874489 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 07:48:27.874498 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-04-16 07:48:27.874506 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 07:48:27.874515 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 07:48:27.874523 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 07:48:27.874532 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 07:48:27.874540 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-04-16 07:48:27.874549 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-04-16 07:48:27.874558 | orchestrator |
2026-04-16 07:48:27.874567 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 07:48:27.874576 | orchestrator | Thursday 16 April 2026 07:48:01 +0000 (0:00:03.313) 0:02:08.165 ********
2026-04-16 07:48:27.874585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:48:27.874595 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:48:27.874603 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:48:27.874612 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:48:27.874621 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 07:48:27.874629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 07:48:27.874638 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 07:48:27.874646 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:48:27.874656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 07:48:27.874667 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 07:48:27.874676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 07:48:27.874686 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:48:27.874696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 07:48:27.874706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 07:48:27.874715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 07:48:27.874725 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.874734 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 07:48:27.874744 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 07:48:27.874754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 07:48:27.874764 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:48:27.874774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 07:48:27.874783 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 07:48:27.874791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 07:48:27.874800 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:48:27.874809 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-04-16 07:48:27.874817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-04-16
07:48:27.874832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-04-16 07:48:27.874841 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:48:27.874850 | orchestrator |
2026-04-16 07:48:27.874859 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 07:48:27.874868 | orchestrator | Thursday 16 April 2026 07:48:03 +0000 (0:00:01.953) 0:02:10.118 ********
2026-04-16 07:48:27.874876 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:48:27.874885 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:48:27.874894 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:48:27.874902 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:48:27.874911 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:48:27.874920 | orchestrator |
2026-04-16 07:48:27.874929 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 07:48:27.874939 | orchestrator | Thursday 16 April 2026 07:48:05 +0000 (0:00:01.960) 0:02:12.079 ********
2026-04-16 07:48:27.874947 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.874956 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:48:27.874964 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:48:27.874973 | orchestrator |
2026-04-16 07:48:27.874981 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 07:48:27.874990 | orchestrator | Thursday 16 April 2026 07:48:06 +0000 (0:00:01.515) 0:02:13.594 ********
2026-04-16 07:48:27.874999 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.875007 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:48:27.875031 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:48:27.875041 | orchestrator |
2026-04-16 07:48:27.875050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 07:48:27.875059 | orchestrator | Thursday 16 April 2026 07:48:08 +0000 (0:00:01.353) 0:02:14.948 ********
2026-04-16 07:48:27.875067 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.875076 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:48:27.875085 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:48:27.875093 | orchestrator |
2026-04-16 07:48:27.875102 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 07:48:27.875111 | orchestrator | Thursday 16 April 2026 07:48:09 +0000 (0:00:01.367) 0:02:16.316 ********
2026-04-16 07:48:27.875120 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:48:27.875129 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:48:27.875142 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:48:27.875151 | orchestrator |
2026-04-16 07:48:27.875160 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 07:48:27.875169 | orchestrator | Thursday 16 April 2026 07:48:11 +0000 (0:00:01.474) 0:02:17.790 ********
2026-04-16 07:48:27.875178 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 07:48:27.875187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 07:48:27.875195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 07:48:27.875204 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.875213 | orchestrator |
2026-04-16 07:48:27.875222 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 07:48:27.875231 | orchestrator | Thursday 16 April 2026 07:48:12 +0000 (0:00:01.631) 0:02:19.422 ********
2026-04-16 07:48:27.875239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 07:48:27.875248 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 07:48:27.875257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 07:48:27.875266 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.875274 | orchestrator |
2026-04-16 07:48:27.875283 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 07:48:27.875292 | orchestrator | Thursday 16 April 2026 07:48:14 +0000 (0:00:01.780) 0:02:21.203 ********
2026-04-16 07:48:27.875306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 07:48:27.875315 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 07:48:27.875324 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 07:48:27.875367 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:48:27.875376 | orchestrator |
2026-04-16 07:48:27.875385 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 07:48:27.875394 | orchestrator | Thursday 16 April 2026 07:48:16 +0000 (0:00:01.638) 0:02:22.841 ********
2026-04-16 07:48:27.875403 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:48:27.875411 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:48:27.875420 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:48:27.875428 | orchestrator |
2026-04-16 07:48:27.875437 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 07:48:27.875446 | orchestrator | Thursday 16 April 2026 07:48:17 +0000 (0:00:01.360) 0:02:24.202 ********
2026-04-16 07:48:27.875455 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 07:48:27.875463 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-16 07:48:27.875472 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 07:48:27.875480 | orchestrator |
2026-04-16 07:48:27.875494 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 07:48:27.875508 | orchestrator | Thursday 16 April 2026 07:48:19 +0000 (0:00:01.596) 0:02:25.799 ********
2026-04-16 07:48:27.875523 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:48:27.875536 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:48:27.875551 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:48:27.875565 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 07:48:27.875578 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 07:48:27.875592 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 07:48:27.875606 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 07:48:27.875619 | orchestrator |
2026-04-16 07:48:27.875633 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 07:48:27.875647 | orchestrator | Thursday 16 April 2026 07:48:21 +0000 (0:00:02.002) 0:02:27.801 ********
2026-04-16 07:48:27.875661 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:48:27.875676 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:48:27.875690 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:48:27.875704 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 07:48:27.875718 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 07:48:27.875733 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 07:48:27.875748 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 07:48:27.875762 | orchestrator |
2026-04-16 07:48:27.875776 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-04-16 07:48:27.875786 | orchestrator | Thursday 16 April 2026 07:48:23 +0000 (0:00:02.851) 0:02:30.653 ********
2026-04-16 07:48:27.875794 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:48:27.875803 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:48:27.875812 | orchestrator | changed: [testbed-manager]
2026-04-16 07:48:27.875837 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:49:09.447811 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:49:09.447929 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:49:09.447946 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:49:09.447982 | orchestrator |
2026-04-16 07:49:09.447995 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-04-16 07:49:09.448008 | orchestrator | Thursday 16 April 2026 07:48:35 +0000 (0:00:11.437) 0:02:42.091 ********
2026-04-16 07:49:09.448019 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:09.448037 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:09.448064 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:09.448084 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:09.448102 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:09.448120 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:09.448155 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:09.448173 | orchestrator |
2026-04-16 07:49:09.448191 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-04-16 07:49:09.448209 | orchestrator | Thursday 16 April 2026 07:48:37 +0000 (0:00:02.054) 0:02:44.145 ********
2026-04-16 07:49:09.448225 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:09.448242 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:09.448258 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:09.448274 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:09.448290 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:09.448306 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:09.448358 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:09.448380 | orchestrator |
2026-04-16 07:49:09.448399 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-04-16 07:49:09.448420 | orchestrator | Thursday 16 April 2026 07:48:39 +0000 (0:00:01.808) 0:02:45.953 ********
2026-04-16 07:49:09.448439 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:09.448460 | orchestrator | changed: [testbed-node-2]
2026-04-16 07:49:09.448480 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:49:09.448498 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:49:09.448516 | orchestrator | changed: [testbed-node-3]
2026-04-16 07:49:09.448534 | orchestrator | changed: [testbed-node-4]
2026-04-16 07:49:09.448552 | orchestrator | changed: [testbed-node-5]
2026-04-16 07:49:09.448570 | orchestrator |
2026-04-16 07:49:09.448589 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-04-16 07:49:09.448608 | orchestrator | Thursday 16 April 2026 07:48:41 +0000 (0:00:02.642) 0:02:48.595 ********
2026-04-16 07:49:09.448627 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-04-16 07:49:09.448648 | orchestrator |
2026-04-16 07:49:09.448668 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-04-16 07:49:09.448686 | orchestrator | Thursday 16 April 2026 07:48:44
+0000 (0:00:02.485) 0:02:51.080 ******** 2026-04-16 07:49:09.448705 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.448723 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.448743 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.448761 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.448775 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.448786 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.448797 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.448808 | orchestrator | 2026-04-16 07:49:09.448818 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-04-16 07:49:09.448830 | orchestrator | Thursday 16 April 2026 07:48:46 +0000 (0:00:01.957) 0:02:53.038 ******** 2026-04-16 07:49:09.448840 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.448851 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.448862 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.448873 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.448883 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.448894 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.448905 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.448916 | orchestrator | 2026-04-16 07:49:09.448940 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-04-16 07:49:09.448952 | orchestrator | Thursday 16 April 2026 07:48:48 +0000 (0:00:02.007) 0:02:55.046 ******** 2026-04-16 07:49:09.448962 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.448973 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.448984 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.448995 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449005 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449016 | 
orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449027 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449038 | orchestrator | 2026-04-16 07:49:09.449049 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-04-16 07:49:09.449060 | orchestrator | Thursday 16 April 2026 07:48:50 +0000 (0:00:01.869) 0:02:56.915 ******** 2026-04-16 07:49:09.449071 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449082 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449093 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449103 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449114 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449124 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449135 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449146 | orchestrator | 2026-04-16 07:49:09.449157 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-04-16 07:49:09.449168 | orchestrator | Thursday 16 April 2026 07:48:52 +0000 (0:00:02.116) 0:02:59.031 ******** 2026-04-16 07:49:09.449179 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449189 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449200 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449211 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449221 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449232 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449243 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449254 | orchestrator | 2026-04-16 07:49:09.449265 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-04-16 07:49:09.449277 | orchestrator | Thursday 16 April 2026 07:48:54 +0000 (0:00:01.880) 0:03:00.911 ******** 2026-04-16 07:49:09.449309 
| orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449351 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449362 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449373 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449384 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449395 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449406 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449416 | orchestrator | 2026-04-16 07:49:09.449428 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-04-16 07:49:09.449439 | orchestrator | Thursday 16 April 2026 07:48:56 +0000 (0:00:02.161) 0:03:03.073 ******** 2026-04-16 07:49:09.449450 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449461 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449480 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449491 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449502 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449513 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449523 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449534 | orchestrator | 2026-04-16 07:49:09.449545 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-04-16 07:49:09.449556 | orchestrator | Thursday 16 April 2026 07:48:58 +0000 (0:00:01.900) 0:03:04.974 ******** 2026-04-16 07:49:09.449567 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449578 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449589 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449600 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449618 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449629 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
07:49:09.449639 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449650 | orchestrator | 2026-04-16 07:49:09.449662 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-04-16 07:49:09.449673 | orchestrator | Thursday 16 April 2026 07:49:00 +0000 (0:00:02.269) 0:03:07.243 ******** 2026-04-16 07:49:09.449684 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449695 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449705 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449716 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449727 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449738 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449749 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449760 | orchestrator | 2026-04-16 07:49:09.449771 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-04-16 07:49:09.449782 | orchestrator | Thursday 16 April 2026 07:49:02 +0000 (0:00:02.016) 0:03:09.260 ******** 2026-04-16 07:49:09.449792 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:49:09.449803 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:49:09.449814 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:49:09.449825 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:09.449836 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:09.449846 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:49:09.449857 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:49:09.449868 | orchestrator | 2026-04-16 07:49:09.449879 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-04-16 07:49:09.449890 | orchestrator | Thursday 16 April 2026 07:49:04 +0000 (0:00:02.020) 0:03:11.280 ******** 2026-04-16 07:49:09.449901 | orchestrator | skipping: [testbed-node-0] 
2026-04-16 07:49:09.449912 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:09.449922 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:09.449933 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:09.449944 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:09.449955 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:09.449966 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:09.449977 | orchestrator |
2026-04-16 07:49:09.449987 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-04-16 07:49:09.449999 | orchestrator | Thursday 16 April 2026 07:49:06 +0000 (0:00:02.003) 0:03:13.283 ********
2026-04-16 07:49:09.450010 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:09.450110 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:09.450129 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:09.450149 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:09.450167 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:09.450186 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:09.450201 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:09.450212 | orchestrator |
2026-04-16 07:49:09.450223 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-04-16 07:49:09.450234 | orchestrator | Thursday 16 April 2026 07:49:08 +0000 (0:00:02.034) 0:03:15.318 ********
2026-04-16 07:49:09.450246 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:09.450265 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:09.450283 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:09.450304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}) 
2026-04-16 07:49:09.450347 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}) 
2026-04-16 07:49:09.450366 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:09.450384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}) 
2026-04-16 07:49:09.450421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}) 
2026-04-16 07:49:09.450439 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:09.450458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}) 
2026-04-16 07:49:09.450523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}) 
2026-04-16 07:49:35.581667 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.581778 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.581793 | orchestrator |
2026-04-16 07:49:35.581805 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-04-16 07:49:35.581818 | orchestrator | Thursday 16 April 2026 07:49:10 +0000 (0:00:02.098) 0:03:17.417 ********
2026-04-16 07:49:35.581830 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.581842 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.581853 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.581865 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.581893 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.581905 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.581916 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.581927 | orchestrator |
2026-04-16 07:49:35.581939 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-04-16 07:49:35.581951 | orchestrator | Thursday 16 April 2026 07:49:12 +0000 (0:00:01.896) 0:03:19.314 ********
2026-04-16 07:49:35.581963 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.581974 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.581985 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.581997 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582008 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582080 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582093 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582105 | orchestrator |
2026-04-16 07:49:35.582117 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-04-16 07:49:35.582128 | orchestrator | Thursday 16 April 2026 07:49:14 +0000 (0:00:02.348) 0:03:21.662 ********
2026-04-16 07:49:35.582139 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.582151 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.582162 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.582182 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582194 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582206 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582220 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582233 | orchestrator |
2026-04-16 07:49:35.582247 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-04-16 07:49:35.582260 | orchestrator | Thursday 16 April 2026 07:49:16 +0000 (0:00:01.941) 0:03:23.603 ********
2026-04-16 07:49:35.582273 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.582287 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.582300 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.582337 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582350 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582364 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582378 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582391 | orchestrator |
2026-04-16 07:49:35.582405 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-04-16 07:49:35.582418 | orchestrator | Thursday 16 April 2026 07:49:19 +0000 (0:00:02.177) 0:03:25.781 ********
2026-04-16 07:49:35.582431 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.582467 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.582481 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.582494 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582507 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582520 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582533 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582544 | orchestrator |
2026-04-16 07:49:35.582556 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-04-16 07:49:35.582568 | orchestrator | Thursday 16 April 2026 07:49:21 +0000 (0:00:02.156) 0:03:27.938 ********
2026-04-16 07:49:35.582579 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.582591 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.582602 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.582614 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582625 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582637 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582648 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582660 | orchestrator |
2026-04-16 07:49:35.582672 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-04-16 07:49:35.582683 | orchestrator | Thursday 16 April 2026 07:49:23 +0000 (0:00:01.835) 0:03:29.774 ********
2026-04-16 07:49:35.582695 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:35.582707 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:35.582719 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:35.582730 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:35.582743 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:49:35.582755 | orchestrator |
2026-04-16 07:49:35.582767 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-04-16 07:49:35.582778 | orchestrator | Thursday 16 April 2026 07:49:25 +0000 (0:00:02.452) 0:03:32.227 ********
2026-04-16 07:49:35.582790 | orchestrator | ok: [testbed-node-3]
2026-04-16 07:49:35.582802 | orchestrator | ok: [testbed-node-4]
2026-04-16 07:49:35.582814 | orchestrator | ok: [testbed-node-5]
2026-04-16 07:49:35.582826 | orchestrator |
2026-04-16 07:49:35.582837 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-04-16 07:49:35.582849 | orchestrator | Thursday 16 April 2026 07:49:26 +0000 (0:00:01.382) 0:03:33.609 ********
2026-04-16 07:49:35.582861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}) 
2026-04-16 07:49:35.582875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}) 
2026-04-16 07:49:35.582887 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.582898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}) 
2026-04-16 07:49:35.582933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}) 
2026-04-16 07:49:35.582945 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.582957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}) 
2026-04-16 07:49:35.582975 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}) 
2026-04-16 07:49:35.582986 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.582998 | orchestrator |
2026-04-16 07:49:35.583010 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-04-16 07:49:35.583022 | orchestrator | Thursday 16 April 2026 07:49:28 +0000 (0:00:01.347) 0:03:34.957 ********
2026-04-16 07:49:35.583044 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583058 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583070 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.583083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583107 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.583119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583131 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:35.583143 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.583155 | orchestrator |
2026-04-16 07:49:35.583166 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-04-16 07:49:35.583178 | orchestrator | Thursday 16 April 2026 07:49:29 +0000 (0:00:01.545) 0:03:36.503 ********
2026-04-16 07:49:35.583190 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.583202 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.583213 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.583225 | orchestrator |
2026-04-16 07:49:35.583237 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-04-16 07:49:35.583248 | orchestrator | Thursday 16 April 2026 07:49:31 +0000 (0:00:01.303) 0:03:37.806 ********
2026-04-16 07:49:35.583260 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.583271 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.583283 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.583295 | orchestrator |
2026-04-16 07:49:35.583392 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-04-16 07:49:35.583405 | orchestrator | Thursday 16 April 2026 07:49:32 +0000 (0:00:01.314) 0:03:39.121 ********
2026-04-16 07:49:35.583416 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.583427 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.583438 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.583449 | orchestrator |
2026-04-16 07:49:35.583460 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-04-16 07:49:35.583470 | orchestrator | Thursday 16 April 2026 07:49:33 +0000 (0:00:01.393) 0:03:40.514 ********
2026-04-16 07:49:35.583481 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:35.583492 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:35.583510 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:35.583521 | orchestrator |
2026-04-16 07:49:35.583531 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-04-16 07:49:35.583541 | orchestrator | Thursday 16 April 2026 07:49:35 +0000 (0:00:01.384) 0:03:41.898 ********
2026-04-16 07:49:35.583557 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 07:49:37.515956 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 07:49:37.516072 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 07:49:37.516087 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 07:49:37.516100 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 07:49:37.516111 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 07:49:37.516122 | orchestrator |
2026-04-16 07:49:37.516135 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-04-16 07:49:37.516148 | orchestrator | Thursday 16 April 2026 07:49:37 +0000 (0:00:02.037) 0:03:43.936 ********
2026-04-16 07:49:37.516164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9/osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1776318534.3292024, 'mtime': 1776318534.3272023, 'ctime': 1776318534.3272023, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9/osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:37.516182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab/osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1776318553.141544, 'mtime': 1776318553.1375442, 'ctime': 1776318553.1375442, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab/osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:37.516225 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:37.516281 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f/osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1776318536.1384344, 'mtime': 1776318536.1324344, 'ctime': 1776318536.1324344, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f/osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:37.516346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-280a11fd-e83f-54f4-b253-754709c5cdf6/osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1776318554.69875, 'mtime': 1776318554.6917498, 'ctime': 1776318554.6917498, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-280a11fd-e83f-54f4-b253-754709c5cdf6/osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:37.516362 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:37.516374 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-4d9f1eac-7172-5024-9561-d385c629a6f5/osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1776318538.1385458, 'mtime': 1776318538.1335456, 'ctime': 1776318538.1335456, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-4d9f1eac-7172-5024-9561-d385c629a6f5/osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:37.516412 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-44db58af-23ca-547e-81cd-90c78ecf63d9/osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1776318556.3988578, 'mtime': 1776318556.3918576, 'ctime': 1776318556.3918576, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-44db58af-23ca-547e-81cd-90c78ecf63d9/osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}, 'ansible_loop_var': 'item'}) 
2026-04-16 07:49:48.783902 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:48.784051 | orchestrator |
2026-04-16 07:49:48.784076 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-04-16 07:49:48.784089 | orchestrator | Thursday 16 April 2026 07:49:38 +0000 (0:00:01.433) 0:03:45.370 ********
2026-04-16 07:49:48.784102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}) 
2026-04-16 07:49:48.784116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}) 
2026-04-16 07:49:48.784127 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:48.784139 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}) 
2026-04-16 07:49:48.784150 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}) 
2026-04-16 07:49:48.784161 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:48.784171 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}) 
2026-04-16 07:49:48.784182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}) 
2026-04-16 07:49:48.784193 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:48.784204 | orchestrator |
2026-04-16 07:49:48.784215 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-04-16 07:49:48.784227 | orchestrator | Thursday 16 April 2026 07:49:39 +0000 (0:00:01.352) 0:03:46.723 ********
2026-04-16 07:49:48.784241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}, 'ansible_loop_var': 'item'})  2026-04-16 
07:49:48.784254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}, 'ansible_loop_var': 'item'})  2026-04-16 07:49:48.784291 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:49:48.784338 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}, 'ansible_loop_var': 'item'})  2026-04-16 07:49:48.784351 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}, 'ansible_loop_var': 'item'})  2026-04-16 07:49:48.784361 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:49:48.784373 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}, 'ansible_loop_var': 'item'})  2026-04-16 07:49:48.784384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}, 'ansible_loop_var': 'item'})  2026-04-16 07:49:48.784395 | orchestrator | skipping: [testbed-node-5] 
2026-04-16 07:49:48.784408 | orchestrator |
2026-04-16 07:49:48.784433 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-04-16 07:49:48.784476 | orchestrator | Thursday 16 April 2026 07:49:41 +0000 (0:00:01.425) 0:03:48.149 ********
2026-04-16 07:49:48.784498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'})
2026-04-16 07:49:48.784518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'})
2026-04-16 07:49:48.784538 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:48.784580 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'})
2026-04-16 07:49:48.784599 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'})
2026-04-16 07:49:48.784617 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:48.784638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'})
2026-04-16 07:49:48.784658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'})
2026-04-16 07:49:48.784679 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:48.784698 | orchestrator |
2026-04-16 07:49:48.784719 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-04-16 07:49:48.784738 | orchestrator | Thursday 16 April 2026 07:49:43 +0000 (0:00:01.699) 0:03:49.849 ********
2026-04-16 07:49:48.784758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-c8cebb68-f409-516c-8b4d-2b5a47d5dab9', 'data_vg': 'ceph-c8cebb68-f409-516c-8b4d-2b5a47d5dab9'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5d85d6a1-6c0d-5a96-8279-fc702a5664ab', 'data_vg': 'ceph-5d85d6a1-6c0d-5a96-8279-fc702a5664ab'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784814 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:48.784834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7b8b78e2-2212-5c47-abe3-ec23a1e6354f', 'data_vg': 'ceph-7b8b78e2-2212-5c47-abe3-ec23a1e6354f'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-280a11fd-e83f-54f4-b253-754709c5cdf6', 'data_vg': 'ceph-280a11fd-e83f-54f4-b253-754709c5cdf6'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784875 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:48.784894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-4d9f1eac-7172-5024-9561-d385c629a6f5', 'data_vg': 'ceph-4d9f1eac-7172-5024-9561-d385c629a6f5'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784915 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-44db58af-23ca-547e-81cd-90c78ecf63d9', 'data_vg': 'ceph-44db58af-23ca-547e-81cd-90c78ecf63d9'}, 'ansible_loop_var': 'item'})
2026-04-16 07:49:48.784935 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:48.784956 | orchestrator |
2026-04-16 07:49:48.784977 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-04-16 07:49:48.784997 | orchestrator | Thursday 16 April 2026 07:49:44 +0000 (0:00:01.833) 0:03:51.198 ********
2026-04-16 07:49:48.785015 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:48.785036 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:48.785056 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:48.785076 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:48.785096 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:48.785115 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:48.785132 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:48.785143 | orchestrator |
2026-04-16 07:49:48.785154 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-04-16 07:49:48.785165 | orchestrator | Thursday 16 April 2026 07:49:46 +0000 (0:00:01.833) 0:03:53.031 ********
2026-04-16 07:49:48.785176 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:48.785186 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:48.785206 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:48.785217 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:48.785229 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 07:49:48.785240 | orchestrator |
2026-04-16 07:49:48.785250 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-04-16 07:49:48.785261 | orchestrator | Thursday 16 April 2026 07:49:48 +0000 (0:00:02.430) 0:03:55.462 ********
2026-04-16 07:49:48.785283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412660 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.412672 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.412683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412705 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412737 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.412748 | orchestrator |
2026-04-16 07:49:59.412761 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-04-16 07:49:59.412773 | orchestrator | Thursday 16 April 2026 07:49:50 +0000 (0:00:01.375) 0:03:56.838 ********
2026-04-16 07:49:59.412784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412838 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.412849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412860 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412927 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.412940 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.412997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413022 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.413035 | orchestrator |
2026-04-16 07:49:59.413048 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-04-16 07:49:59.413061 | orchestrator | Thursday 16 April 2026 07:49:51 +0000 (0:00:01.708) 0:03:58.546 ********
2026-04-16 07:49:59.413073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413150 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413182 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.413196 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.413215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 07:49:59.413331 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.413348 | orchestrator |
2026-04-16 07:49:59.413375 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-04-16 07:49:59.413392 | orchestrator | Thursday 16 April 2026 07:49:53 +0000 (0:00:01.424) 0:03:59.971 ********
2026-04-16 07:49:59.413409 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:59.413426 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:59.413442 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:59.413460 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.413477 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.413494 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.413510 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:59.413528 | orchestrator |
2026-04-16 07:49:59.413546 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-04-16 07:49:59.413564 | orchestrator | Thursday 16 April 2026 07:49:55 +0000 (0:00:01.827) 0:04:01.799 ********
2026-04-16 07:49:59.413581 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:59.413600 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:59.413617 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:59.413635 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.413652 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.413670 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.413689 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:59.413707 | orchestrator |
2026-04-16 07:49:59.413725 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-04-16 07:49:59.413752 | orchestrator | Thursday 16 April 2026 07:49:57 +0000 (0:00:02.087) 0:04:03.886 ********
2026-04-16 07:49:59.413763 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:49:59.413774 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:49:59.413785 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:49:59.413796 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:49:59.413806 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:49:59.413817 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:49:59.413828 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:49:59.413838 | orchestrator |
2026-04-16 07:49:59.413849 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-04-16 07:49:59.413860 | orchestrator | Thursday 16 April 2026 07:49:59 +0000 (0:00:02.149) 0:04:06.036 ********
2026-04-16 07:49:59.413883 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:08.262203 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:08.262380 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:08.262392 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:08.262401 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:08.262409 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:08.262418 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:08.262426 | orchestrator |
2026-04-16 07:50:08.262436 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-04-16 07:50:08.262446 | orchestrator | Thursday 16 April 2026 07:50:01 +0000 (0:00:01.886) 0:04:07.923 ********
2026-04-16 07:50:08.262455 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:08.262464 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:08.262472 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:08.262480 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:08.262488 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:08.262496 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:08.262504 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:08.262512 | orchestrator |
2026-04-16 07:50:08.262520 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-04-16 07:50:08.262528 | orchestrator | Thursday 16 April 2026 07:50:03 +0000 (0:00:02.040) 0:04:09.964 ********
2026-04-16 07:50:08.262536 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:08.262545 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:08.262552 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:08.262560 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:08.262596 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:08.262604 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:08.262612 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:08.262620 | orchestrator |
2026-04-16 07:50:08.262628 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-04-16 07:50:08.262637 | orchestrator | Thursday 16 April 2026 07:50:05 +0000 (0:00:01.867) 0:04:11.832 ********
2026-04-16 07:50:08.262645 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:08.262652 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:08.262660 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:08.262668 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:08.262678 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:08.262686 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:08.262697 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:08.262705 | orchestrator |
2026-04-16 07:50:08.262715 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-04-16 07:50:08.262724 | orchestrator | Thursday 16 April 2026 07:50:07 +0000 (0:00:02.150) 0:04:13.983 ********
2026-04-16 07:50:08.262734 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.262746 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.262758 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.262769 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.262778 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:08.262791 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:08.262801 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.262810 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.262818 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.262826 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.262850 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:08.262858 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:08.262867 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:08.262893 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.262902 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.262917 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.262925 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.262933 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:08.262941 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:08.262949 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:08.262957 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.262965 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:08.262974 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.262982 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.262990 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.262998 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:08.263006 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.263014 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:08.263022 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.263030 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.263038 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.263046 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:08.263054 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:08.263062 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.263070 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:08.263083 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:08.263097 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:08.263105 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-04-16 07:50:08.263118 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-04-16 07:50:12.388197 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-04-16 07:50:12.388397 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-04-16 07:50:12.388417 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:12.388433 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:12.388446 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:12.388457 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:12.388469 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:12.388481 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-04-16 07:50:12.388492 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-04-16 07:50:12.388504 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:12.388515 | orchestrator |
2026-04-16 07:50:12.388527 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-04-16 07:50:12.388540 | orchestrator | Thursday 16 April 2026 07:50:09 +0000 (0:00:02.269) 0:04:16.252 ********
2026-04-16 07:50:12.388552 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:50:12.388563 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:50:12.388575 | orchestrator | skipping: [testbed-node-2]
2026-04-16 07:50:12.388586 | orchestrator | skipping: [testbed-node-3]
2026-04-16 07:50:12.388598 | orchestrator | skipping: [testbed-node-4]
2026-04-16 07:50:12.388609 | orchestrator | skipping: [testbed-node-5]
2026-04-16 07:50:12.388620 | orchestrator | skipping: [testbed-manager]
2026-04-16 07:50:12.388631 | orchestrator |
2026-04-16 07:50:12.388643 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-04-16
07:50:12.388654 | orchestrator | Thursday 16 April 2026 07:50:11 +0000 (0:00:02.047) 0:04:18.300 ******** 2026-04-16 07:50:12.388665 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.388677 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.388689 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:12.388732 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:12.388745 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 07:50:12.388757 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:12.388770 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:12.388782 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.388815 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.388826 | orchestrator | skipping: [testbed-node-1] => 
(item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:12.388838 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:12.388871 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 07:50:12.388885 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:12.388896 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:12.388908 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.388920 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.388931 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:12.388943 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:12.388954 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 
07:50:12.388966 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:12.388978 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:12.388989 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.389001 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.389012 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.389024 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:12.389044 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.389056 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:12.389067 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:12.389079 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 
'name': 'client.nova'})  2026-04-16 07:50:12.389090 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:12.389101 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:12.389118 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 07:50:12.389130 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:12.389140 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:12.389147 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:12.389158 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:48.587896 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.588006 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:48.588018 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588028 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 07:50:48.588039 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-04-16 07:50:48.588049 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-04-16 07:50:48.588058 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-04-16 07:50:48.588068 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:48.588076 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-04-16 07:50:48.588106 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588119 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-04-16 07:50:48.588133 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-04-16 07:50:48.588146 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588158 | orchestrator | 2026-04-16 07:50:48.588177 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-04-16 
07:50:48.588195 | orchestrator | Thursday 16 April 2026 07:50:13 +0000 (0:00:02.052) 0:04:20.353 ******** 2026-04-16 07:50:48.588209 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.588223 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.588236 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.588251 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588265 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.588340 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588354 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588367 | orchestrator | 2026-04-16 07:50:48.588381 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-04-16 07:50:48.588395 | orchestrator | Thursday 16 April 2026 07:50:16 +0000 (0:00:02.819) 0:04:23.172 ******** 2026-04-16 07:50:48.588408 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.588421 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.588434 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.588448 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588462 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.588474 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588483 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588492 | orchestrator | 2026-04-16 07:50:48.588501 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-04-16 07:50:48.588511 | orchestrator | Thursday 16 April 2026 07:50:18 +0000 (0:00:02.577) 0:04:25.749 ******** 2026-04-16 07:50:48.588520 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.588529 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.588538 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.588547 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588556 | orchestrator 
| skipping: [testbed-node-4] 2026-04-16 07:50:48.588566 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588575 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588584 | orchestrator | 2026-04-16 07:50:48.588593 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-16 07:50:48.588602 | orchestrator | Thursday 16 April 2026 07:50:21 +0000 (0:00:02.326) 0:04:28.076 ******** 2026-04-16 07:50:48.588626 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-04-16 07:50:48.588636 | orchestrator | 2026-04-16 07:50:48.588645 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-04-16 07:50:48.588654 | orchestrator | Thursday 16 April 2026 07:50:24 +0000 (0:00:02.692) 0:04:30.768 ******** 2026-04-16 07:50:48.588663 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588673 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588682 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588692 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588720 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588748 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588762 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-04-16 07:50:48.588775 | orchestrator | 2026-04-16 07:50:48.588789 | orchestrator | TASK 
[ceph-container-engine : Create the systemd docker override directory] **** 2026-04-16 07:50:48.588803 | orchestrator | Thursday 16 April 2026 07:50:26 +0000 (0:00:02.179) 0:04:32.948 ******** 2026-04-16 07:50:48.588817 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.588830 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.588844 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.588853 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588861 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.588868 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588876 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588884 | orchestrator | 2026-04-16 07:50:48.588892 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-04-16 07:50:48.588899 | orchestrator | Thursday 16 April 2026 07:50:28 +0000 (0:00:02.145) 0:04:35.093 ******** 2026-04-16 07:50:48.588911 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.588919 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.588927 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.588934 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.588942 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.588950 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.588958 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.588965 | orchestrator | 2026-04-16 07:50:48.588973 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-04-16 07:50:48.588981 | orchestrator | Thursday 16 April 2026 07:50:30 +0000 (0:00:01.950) 0:04:37.044 ******** 2026-04-16 07:50:48.588989 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:50:48.588997 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:50:48.589005 | orchestrator | ok: [testbed-node-2] 2026-04-16 07:50:48.589012 | orchestrator | 
ok: [testbed-node-3] 2026-04-16 07:50:48.589020 | orchestrator | ok: [testbed-node-4] 2026-04-16 07:50:48.589028 | orchestrator | ok: [testbed-node-5] 2026-04-16 07:50:48.589036 | orchestrator | ok: [testbed-manager] 2026-04-16 07:50:48.589044 | orchestrator | 2026-04-16 07:50:48.589051 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-04-16 07:50:48.589059 | orchestrator | Thursday 16 April 2026 07:50:32 +0000 (0:00:02.691) 0:04:39.735 ******** 2026-04-16 07:50:48.589067 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.589075 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.589083 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.589090 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.589098 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.589106 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.589113 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.589121 | orchestrator | 2026-04-16 07:50:48.589129 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-04-16 07:50:48.589137 | orchestrator | Thursday 16 April 2026 07:50:35 +0000 (0:00:02.470) 0:04:42.206 ******** 2026-04-16 07:50:48.589145 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.589152 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:50:48.589160 | orchestrator | skipping: [testbed-node-2] 2026-04-16 07:50:48.589168 | orchestrator | skipping: [testbed-node-3] 2026-04-16 07:50:48.589176 | orchestrator | skipping: [testbed-node-4] 2026-04-16 07:50:48.589183 | orchestrator | skipping: [testbed-node-5] 2026-04-16 07:50:48.589191 | orchestrator | skipping: [testbed-manager] 2026-04-16 07:50:48.589199 | orchestrator | 2026-04-16 07:50:48.589207 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-04-16 07:50:48.589214 | orchestrator | 
Thursday 16 April 2026 07:50:37 +0000 (0:00:02.366) 0:04:44.572 ******** 2026-04-16 07:50:48.589229 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:50:48.589237 | orchestrator | 2026-04-16 07:50:48.589245 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-04-16 07:50:48.589253 | orchestrator | Thursday 16 April 2026 07:50:40 +0000 (0:00:02.708) 0:04:47.281 ******** 2026-04-16 07:50:48.589261 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:50:48.589269 | orchestrator | 2026-04-16 07:50:48.589298 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-04-16 07:50:48.589307 | orchestrator | 2026-04-16 07:50:48.589315 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 07:50:48.589323 | orchestrator | Thursday 16 April 2026 07:50:41 +0000 (0:00:01.418) 0:04:48.699 ******** 2026-04-16 07:50:48.589331 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:50:48.589339 | orchestrator | 2026-04-16 07:50:48.589346 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 07:50:48.589354 | orchestrator | Thursday 16 April 2026 07:50:43 +0000 (0:00:01.495) 0:04:50.194 ******** 2026-04-16 07:50:48.589363 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:50:48.589370 | orchestrator | 2026-04-16 07:50:48.589378 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-04-16 07:50:48.589392 | orchestrator | Thursday 16 April 2026 07:50:44 +0000 (0:00:01.110) 0:04:51.305 ******** 2026-04-16 07:50:48.589403 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-16 07:50:48.589421 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-16 07:51:22.815750 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-16 07:51:22.815890 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-16 07:51:22.815917 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-16 07:51:22.815938 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}])  2026-04-16 07:51:22.815960 | orchestrator | 2026-04-16 07:51:22.815980 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-04-16 07:51:22.816033 | orchestrator | 2026-04-16 07:51:22.816053 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-04-16 07:51:22.816070 | orchestrator | Thursday 16 April 2026 07:50:55 +0000 (0:00:10.831) 0:05:02.136 ******** 2026-04-16 07:51:22.816087 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816106 | orchestrator | 2026-04-16 07:51:22.816124 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-04-16 07:51:22.816142 | orchestrator | Thursday 16 April 2026 07:50:56 +0000 (0:00:01.482) 0:05:03.619 ******** 2026-04-16 07:51:22.816161 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816179 | orchestrator | 2026-04-16 07:51:22.816197 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-04-16 07:51:22.816213 | orchestrator | Thursday 16 April 2026 07:50:57 +0000 (0:00:01.098) 0:05:04.718 ******** 2026-04-16 07:51:22.816230 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:51:22.816249 | orchestrator | 2026-04-16 07:51:22.816303 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-04-16 07:51:22.816326 | orchestrator | Thursday 16 April 2026 07:50:59 +0000 (0:00:01.181) 0:05:05.900 ******** 2026-04-16 07:51:22.816347 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816368 | orchestrator | 2026-04-16 07:51:22.816388 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 
07:51:22.816410 | orchestrator | Thursday 16 April 2026 07:51:00 +0000 (0:00:01.104) 0:05:07.004 ******** 2026-04-16 07:51:22.816431 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-04-16 07:51:22.816452 | orchestrator | 2026-04-16 07:51:22.816474 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 07:51:22.816495 | orchestrator | Thursday 16 April 2026 07:51:01 +0000 (0:00:01.078) 0:05:08.083 ******** 2026-04-16 07:51:22.816515 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816536 | orchestrator | 2026-04-16 07:51:22.816558 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 07:51:22.816580 | orchestrator | Thursday 16 April 2026 07:51:02 +0000 (0:00:01.462) 0:05:09.545 ******** 2026-04-16 07:51:22.816602 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816623 | orchestrator | 2026-04-16 07:51:22.816645 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 07:51:22.816666 | orchestrator | Thursday 16 April 2026 07:51:03 +0000 (0:00:01.135) 0:05:10.681 ******** 2026-04-16 07:51:22.816688 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816710 | orchestrator | 2026-04-16 07:51:22.816749 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 07:51:22.816770 | orchestrator | Thursday 16 April 2026 07:51:05 +0000 (0:00:01.552) 0:05:12.234 ******** 2026-04-16 07:51:22.816789 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816809 | orchestrator | 2026-04-16 07:51:22.816827 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 07:51:22.816847 | orchestrator | Thursday 16 April 2026 07:51:06 +0000 (0:00:01.120) 0:05:13.355 ******** 2026-04-16 07:51:22.816865 | orchestrator | ok: [testbed-node-0] 2026-04-16 
07:51:22.816885 | orchestrator | 2026-04-16 07:51:22.816905 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 07:51:22.816924 | orchestrator | Thursday 16 April 2026 07:51:07 +0000 (0:00:01.151) 0:05:14.506 ******** 2026-04-16 07:51:22.816945 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.816964 | orchestrator | 2026-04-16 07:51:22.816983 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 07:51:22.817002 | orchestrator | Thursday 16 April 2026 07:51:08 +0000 (0:00:01.193) 0:05:15.700 ******** 2026-04-16 07:51:22.817021 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:51:22.817040 | orchestrator | 2026-04-16 07:51:22.817090 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 07:51:22.817113 | orchestrator | Thursday 16 April 2026 07:51:10 +0000 (0:00:01.179) 0:05:16.880 ******** 2026-04-16 07:51:22.817131 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:51:22.817166 | orchestrator | 2026-04-16 07:51:22.817186 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 07:51:22.817207 | orchestrator | Thursday 16 April 2026 07:51:11 +0000 (0:00:01.135) 0:05:18.016 ******** 2026-04-16 07:51:22.817226 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 07:51:22.817246 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 07:51:22.817301 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:51:22.817322 | orchestrator | 2026-04-16 07:51:22.817341 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 07:51:22.817359 | orchestrator | Thursday 16 April 2026 07:51:12 +0000 (0:00:01.678) 0:05:19.695 ******** 2026-04-16 07:51:22.817378 | 
orchestrator | ok: [testbed-node-0]
2026-04-16 07:51:22.817389 | orchestrator |
2026-04-16 07:51:22.817400 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 07:51:22.817411 | orchestrator | Thursday 16 April 2026 07:51:14 +0000 (0:00:01.246) 0:05:20.941 ********
2026-04-16 07:51:22.817421 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:51:22.817433 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:51:22.817443 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:51:22.817454 | orchestrator |
2026-04-16 07:51:22.817465 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 07:51:22.817475 | orchestrator | Thursday 16 April 2026 07:51:18 +0000 (0:00:04.129) 0:05:25.071 ********
2026-04-16 07:51:22.817486 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:51:22.817499 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:51:22.817510 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:51:22.817521 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:22.817531 | orchestrator |
2026-04-16 07:51:22.817542 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 07:51:22.817553 | orchestrator | Thursday 16 April 2026 07:51:19 +0000 (0:00:01.404) 0:05:26.476 ********
2026-04-16 07:51:22.817566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817601 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:22.817612 | orchestrator |
2026-04-16 07:51:22.817623 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 07:51:22.817634 | orchestrator | Thursday 16 April 2026 07:51:21 +0000 (0:00:01.932) 0:05:28.409 ********
2026-04-16 07:51:22.817646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817674 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817695 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:22.817706 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:22.817717 | orchestrator |
2026-04-16 07:51:22.817728 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 07:51:22.817751 | orchestrator | Thursday 16 April 2026 07:51:22 +0000 (0:00:01.152) 0:05:29.561 ********
2026-04-16 07:51:41.207636 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7ecc09e53bd0', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 07:51:14.714034', 'end': '2026-04-16 07:51:14.757604', 'delta': '0:00:00.043570', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7ecc09e53bd0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 07:51:41.207715 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'deb83ba22d33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 07:51:15.262585', 'end': '2026-04-16 07:51:15.305532', 'delta': '0:00:00.042947', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['deb83ba22d33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 07:51:41.207723 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '8eb997055eb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 07:51:16.094693', 'end': '2026-04-16 07:51:17.147434', 'delta': '0:00:01.052741', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8eb997055eb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 07:51:41.207728 | orchestrator |
2026-04-16 07:51:41.207734 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-16 07:51:41.207740 | orchestrator | Thursday 16 April 2026 07:51:23 +0000 (0:00:01.188) 0:05:30.750 ********
2026-04-16 07:51:41.207745 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:51:41.207751 | orchestrator |
2026-04-16 07:51:41.207756 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 07:51:41.207761 | orchestrator | Thursday 16 April 2026 07:51:25 +0000 (0:00:01.599) 0:05:32.350 ********
2026-04-16 07:51:41.207766 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207771 | orchestrator |
2026-04-16 07:51:41.207776 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 07:51:41.207801 | orchestrator | Thursday 16 April 2026 07:51:26 +0000 (0:00:01.217) 0:05:33.568 ********
2026-04-16 07:51:41.207806 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:51:41.207810 | orchestrator |
2026-04-16 07:51:41.207815 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 07:51:41.207819 | orchestrator | Thursday 16 April 2026 07:51:27 +0000 (0:00:01.108) 0:05:34.676 ********
2026-04-16 07:51:41.207824 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-16 07:51:41.207829 | orchestrator |
2026-04-16 07:51:41.207833 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:51:41.207838 | orchestrator | Thursday 16 April 2026 07:51:29 +0000 (0:00:02.064) 0:05:36.741 ********
2026-04-16 07:51:41.207842 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:51:41.207847 | orchestrator |
2026-04-16 07:51:41.207862 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 07:51:41.207866 | orchestrator | Thursday 16 April 2026 07:51:31 +0000 (0:00:01.155) 0:05:37.896 ********
2026-04-16 07:51:41.207871 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207876 | orchestrator |
2026-04-16 07:51:41.207880 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 07:51:41.207885 | orchestrator | Thursday 16 April 2026 07:51:32 +0000 (0:00:01.088) 0:05:38.984 ********
2026-04-16 07:51:41.207889 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207894 | orchestrator |
2026-04-16 07:51:41.207899 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:51:41.207904 | orchestrator | Thursday 16 April 2026 07:51:33 +0000 (0:00:01.220) 0:05:40.205 ********
2026-04-16 07:51:41.207908 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207913 | orchestrator |
2026-04-16 07:51:41.207917 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 07:51:41.207922 | orchestrator | Thursday 16 April 2026 07:51:34 +0000 (0:00:01.098) 0:05:41.303 ********
2026-04-16 07:51:41.207927 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207931 | orchestrator |
2026-04-16 07:51:41.207946 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 07:51:41.207952 | orchestrator | Thursday 16 April 2026 07:51:35 +0000 (0:00:01.190) 0:05:42.494 ********
2026-04-16 07:51:41.207956 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207961 | orchestrator |
2026-04-16 07:51:41.207965 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 07:51:41.207970 | orchestrator | Thursday 16 April 2026 07:51:36 +0000 (0:00:01.123) 0:05:43.617 ********
2026-04-16 07:51:41.207974 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207979 | orchestrator |
2026-04-16 07:51:41.207984 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 07:51:41.207988 | orchestrator | Thursday 16 April 2026 07:51:37 +0000 (0:00:01.129) 0:05:44.747 ********
2026-04-16 07:51:41.207993 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.207997 | orchestrator |
2026-04-16 07:51:41.208002 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 07:51:41.208006 | orchestrator | Thursday 16 April 2026 07:51:39 +0000 (0:00:01.118) 0:05:45.865 ********
2026-04-16 07:51:41.208011 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.208015 | orchestrator |
2026-04-16 07:51:41.208020 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-16 07:51:41.208025 | orchestrator | Thursday 16 April 2026 07:51:40 +0000 (0:00:00.915) 0:05:46.781 ********
2026-04-16 07:51:41.208029 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:41.208034 | orchestrator |
2026-04-16 07:51:41.208038 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-16 07:51:41.208043 | orchestrator | Thursday 16 April 2026 07:51:41 +0000 (0:00:01.069) 0:05:47.850 ********
2026-04-16 07:51:41.208049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:41.208060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:41.208065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:41.208071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-16 07:51:41.208079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:41.208085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:41.208094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:42.385183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 07:51:42.385328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:42.385346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 07:51:42.385358 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:51:42.385369 | orchestrator |
2026-04-16 07:51:42.385393 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 07:51:42.385405 | orchestrator | Thursday 16 April 2026 07:51:42 +0000 (0:00:01.184) 0:05:49.035 ********
2026-04-16 07:51:42.385417 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385446 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385478 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385496 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385513 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385543 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:51:42.385580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:52:32.462203 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:52:32.462374 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 07:52:32.462392 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462405 | orchestrator |
2026-04-16 07:52:32.462431 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 07:52:32.462444 | orchestrator | Thursday 16 April 2026 07:51:43 +0000 (0:00:01.179) 0:05:50.215 ********
2026-04-16 07:52:32.462455 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:52:32.462467 | orchestrator |
2026-04-16 07:52:32.462478 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 07:52:32.462489 | orchestrator | Thursday 16 April 2026 07:51:44 +0000 (0:00:01.479) 0:05:51.695 ********
2026-04-16 07:52:32.462499 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:52:32.462510 | orchestrator |
2026-04-16 07:52:32.462521 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 07:52:32.462532 | orchestrator | Thursday 16 April 2026 07:51:46 +0000 (0:00:01.099) 0:05:52.794 ********
2026-04-16 07:52:32.462542 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:52:32.462553 | orchestrator |
2026-04-16 07:52:32.462564 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 07:52:32.462575 | orchestrator | Thursday 16 April 2026 07:51:47 +0000 (0:00:01.500) 0:05:54.295 ********
2026-04-16 07:52:32.462586 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462596 | orchestrator |
2026-04-16 07:52:32.462607 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 07:52:32.462641 | orchestrator | Thursday 16 April 2026 07:51:48 +0000 (0:00:01.201) 0:05:55.496 ********
2026-04-16 07:52:32.462653 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462664 | orchestrator |
2026-04-16 07:52:32.462675 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 07:52:32.462686 | orchestrator | Thursday 16 April 2026 07:51:49 +0000 (0:00:01.219) 0:05:56.716 ********
2026-04-16 07:52:32.462696 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462707 | orchestrator |
2026-04-16 07:52:32.462718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 07:52:32.462732 | orchestrator | Thursday 16 April 2026 07:51:51 +0000 (0:00:01.167) 0:05:57.884 ********
2026-04-16 07:52:32.462744 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.462756 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:52:32.462769 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:52:32.462783 | orchestrator |
2026-04-16 07:52:32.462795 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 07:52:32.462807 | orchestrator | Thursday 16 April 2026 07:51:53 +0000 (0:00:01.919) 0:05:59.804 ********
2026-04-16 07:52:32.462820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.462850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:52:32.462862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:52:32.462874 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462900 | orchestrator |
2026-04-16 07:52:32.462920 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 07:52:32.462938 | orchestrator | Thursday 16 April 2026 07:51:54 +0000 (0:00:01.121) 0:06:00.925 ********
2026-04-16 07:52:32.462956 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.462974 | orchestrator |
2026-04-16 07:52:32.462992 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 07:52:32.463008 | orchestrator | Thursday 16 April 2026 07:51:55 +0000 (0:00:01.165) 0:06:02.091 ********
2026-04-16 07:52:32.463026 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.463044 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:52:32.463064 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:52:32.463081 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 07:52:32.463099 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 07:52:32.463117 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 07:52:32.463161 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 07:52:32.463181 | orchestrator |
2026-04-16 07:52:32.463199 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 07:52:32.463218 | orchestrator | Thursday 16 April 2026 07:51:57 +0000 (0:00:02.052) 0:06:04.144 ********
2026-04-16 07:52:32.463264 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.463285 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:52:32.463303 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:52:32.463320 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 07:52:32.463337 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 07:52:32.463356 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 07:52:32.463373 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 07:52:32.463393 | orchestrator |
2026-04-16 07:52:32.463410 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-04-16 07:52:32.463446 | orchestrator | Thursday 16 April 2026 07:52:00 +0000 (0:00:02.678) 0:06:06.822 ********
2026-04-16 07:52:32.463467 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-16 07:52:32.463485 | orchestrator |
2026-04-16 07:52:32.463504 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-04-16 07:52:32.463522 | orchestrator | Thursday 16 April 2026 07:52:02 +0000 (0:00:02.218) 0:06:09.041 ********
2026-04-16 07:52:32.463533 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.463544 | orchestrator |
2026-04-16 07:52:32.463565 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-04-16 07:52:32.463576 | orchestrator | Thursday 16 April 2026 07:52:03 +0000 (0:00:01.208) 0:06:10.249 ********
2026-04-16 07:52:32.463586 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.463597 | orchestrator |
2026-04-16 07:52:32.463608 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-04-16 07:52:32.463618 | orchestrator | Thursday 16 April 2026 07:52:04 +0000 (0:00:01.103) 0:06:11.353 ********
2026-04-16 07:52:32.463629 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-16 07:52:32.463639 | orchestrator |
2026-04-16 07:52:32.463650 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-04-16 07:52:32.463661 | orchestrator | Thursday 16 April 2026 07:52:06 +0000 (0:00:02.248) 0:06:13.601 ********
2026-04-16 07:52:32.463671 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:52:32.463682 | orchestrator |
2026-04-16 07:52:32.463693 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-04-16 07:52:32.463704 | orchestrator | Thursday 16 April 2026 07:52:07 +0000 (0:00:01.095) 0:06:14.697 ********
2026-04-16 07:52:32.463714 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.463725 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 07:52:32.463735 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:52:32.463746 | orchestrator |
2026-04-16 07:52:32.463757 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-04-16 07:52:32.463767 | orchestrator | Thursday 16 April 2026 07:52:10 +0000 (0:00:02.467) 0:06:17.165 ********
2026-04-16 07:52:32.463778 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-04-16 07:52:32.463789 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-04-16 07:52:32.463801 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-04-16 07:52:32.463812 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-04-16 07:52:32.463822 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-04-16 07:52:32.463833 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-04-16 07:52:32.463844 | orchestrator |
2026-04-16 07:52:32.463854 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-04-16 07:52:32.463865 | orchestrator | Thursday 16 April 2026 07:52:23 +0000 (0:00:13.399) 0:06:30.565 ********
2026-04-16 07:52:32.463876 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.463886 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:52:32.463897 | orchestrator |
2026-04-16 07:52:32.463907 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-04-16 07:52:32.463918 | orchestrator | Thursday 16
April 2026 07:52:27 +0000 (0:00:04.096) 0:06:34.661 ******** 2026-04-16 07:52:32.463928 | orchestrator | changed: [testbed-node-0] 2026-04-16 07:52:32.463939 | orchestrator | 2026-04-16 07:52:32.463950 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 07:52:32.463970 | orchestrator | Thursday 16 April 2026 07:52:30 +0000 (0:00:02.657) 0:06:37.319 ******** 2026-04-16 07:52:32.463981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-16 07:52:32.463991 | orchestrator | 2026-04-16 07:52:32.464002 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 07:52:32.464013 | orchestrator | Thursday 16 April 2026 07:52:31 +0000 (0:00:01.437) 0:06:38.756 ******** 2026-04-16 07:52:32.464023 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-16 07:52:32.464034 | orchestrator | 2026-04-16 07:52:32.464056 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 07:53:23.799073 | orchestrator | Thursday 16 April 2026 07:52:33 +0000 (0:00:01.532) 0:06:40.288 ******** 2026-04-16 07:53:23.799182 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799195 | orchestrator | 2026-04-16 07:53:23.799205 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 07:53:23.799215 | orchestrator | Thursday 16 April 2026 07:52:35 +0000 (0:00:01.542) 0:06:41.831 ******** 2026-04-16 07:53:23.799224 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799233 | orchestrator | 2026-04-16 07:53:23.799242 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 07:53:23.799251 | orchestrator | Thursday 16 April 2026 07:52:36 +0000 (0:00:01.095) 0:06:42.927 ******** 2026-04-16 07:53:23.799306 | 
orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799315 | orchestrator | 2026-04-16 07:53:23.799325 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 07:53:23.799334 | orchestrator | Thursday 16 April 2026 07:52:37 +0000 (0:00:01.082) 0:06:44.010 ******** 2026-04-16 07:53:23.799343 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799352 | orchestrator | 2026-04-16 07:53:23.799361 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 07:53:23.799370 | orchestrator | Thursday 16 April 2026 07:52:38 +0000 (0:00:01.153) 0:06:45.164 ******** 2026-04-16 07:53:23.799379 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799388 | orchestrator | 2026-04-16 07:53:23.799397 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 07:53:23.799406 | orchestrator | Thursday 16 April 2026 07:52:39 +0000 (0:00:01.543) 0:06:46.708 ******** 2026-04-16 07:53:23.799415 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799424 | orchestrator | 2026-04-16 07:53:23.799433 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 07:53:23.799465 | orchestrator | Thursday 16 April 2026 07:52:41 +0000 (0:00:01.144) 0:06:47.852 ******** 2026-04-16 07:53:23.799475 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799484 | orchestrator | 2026-04-16 07:53:23.799493 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 07:53:23.799502 | orchestrator | Thursday 16 April 2026 07:52:42 +0000 (0:00:01.106) 0:06:48.958 ******** 2026-04-16 07:53:23.799511 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799520 | orchestrator | 2026-04-16 07:53:23.799529 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 
07:53:23.799538 | orchestrator | Thursday 16 April 2026 07:52:43 +0000 (0:00:01.565) 0:06:50.523 ******** 2026-04-16 07:53:23.799547 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799556 | orchestrator | 2026-04-16 07:53:23.799565 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 07:53:23.799574 | orchestrator | Thursday 16 April 2026 07:52:45 +0000 (0:00:01.598) 0:06:52.122 ******** 2026-04-16 07:53:23.799583 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799592 | orchestrator | 2026-04-16 07:53:23.799601 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 07:53:23.799610 | orchestrator | Thursday 16 April 2026 07:52:46 +0000 (0:00:01.088) 0:06:53.211 ******** 2026-04-16 07:53:23.799620 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799630 | orchestrator | 2026-04-16 07:53:23.799661 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 07:53:23.799672 | orchestrator | Thursday 16 April 2026 07:52:47 +0000 (0:00:01.113) 0:06:54.324 ******** 2026-04-16 07:53:23.799681 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799691 | orchestrator | 2026-04-16 07:53:23.799702 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 07:53:23.799715 | orchestrator | Thursday 16 April 2026 07:52:48 +0000 (0:00:01.105) 0:06:55.430 ******** 2026-04-16 07:53:23.799731 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799745 | orchestrator | 2026-04-16 07:53:23.799765 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 07:53:23.799785 | orchestrator | Thursday 16 April 2026 07:52:49 +0000 (0:00:01.109) 0:06:56.540 ******** 2026-04-16 07:53:23.799799 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799813 | orchestrator | 
2026-04-16 07:53:23.799827 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 07:53:23.799841 | orchestrator | Thursday 16 April 2026 07:52:50 +0000 (0:00:01.128) 0:06:57.669 ******** 2026-04-16 07:53:23.799856 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799870 | orchestrator | 2026-04-16 07:53:23.799885 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 07:53:23.799899 | orchestrator | Thursday 16 April 2026 07:52:52 +0000 (0:00:01.119) 0:06:58.788 ******** 2026-04-16 07:53:23.799913 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.799927 | orchestrator | 2026-04-16 07:53:23.799941 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 07:53:23.799955 | orchestrator | Thursday 16 April 2026 07:52:53 +0000 (0:00:01.096) 0:06:59.885 ******** 2026-04-16 07:53:23.799970 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.799983 | orchestrator | 2026-04-16 07:53:23.799998 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 07:53:23.800007 | orchestrator | Thursday 16 April 2026 07:52:54 +0000 (0:00:01.138) 0:07:01.024 ******** 2026-04-16 07:53:23.800021 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.800035 | orchestrator | 2026-04-16 07:53:23.800050 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 07:53:23.800064 | orchestrator | Thursday 16 April 2026 07:52:55 +0000 (0:00:01.163) 0:07:02.187 ******** 2026-04-16 07:53:23.800079 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.800094 | orchestrator | 2026-04-16 07:53:23.800109 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 07:53:23.800124 | orchestrator | Thursday 16 April 2026 07:52:56 +0000 (0:00:01.126) 0:07:03.314 
******** 2026-04-16 07:53:23.800138 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800152 | orchestrator | 2026-04-16 07:53:23.800167 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 07:53:23.800185 | orchestrator | Thursday 16 April 2026 07:52:57 +0000 (0:00:01.079) 0:07:04.393 ******** 2026-04-16 07:53:23.800205 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800224 | orchestrator | 2026-04-16 07:53:23.800296 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 07:53:23.800317 | orchestrator | Thursday 16 April 2026 07:52:58 +0000 (0:00:01.118) 0:07:05.512 ******** 2026-04-16 07:53:23.800336 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800355 | orchestrator | 2026-04-16 07:53:23.800373 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 07:53:23.800391 | orchestrator | Thursday 16 April 2026 07:52:59 +0000 (0:00:01.104) 0:07:06.616 ******** 2026-04-16 07:53:23.800410 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800428 | orchestrator | 2026-04-16 07:53:23.800446 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 07:53:23.800465 | orchestrator | Thursday 16 April 2026 07:53:00 +0000 (0:00:01.129) 0:07:07.746 ******** 2026-04-16 07:53:23.800484 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800502 | orchestrator | 2026-04-16 07:53:23.800537 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 07:53:23.800556 | orchestrator | Thursday 16 April 2026 07:53:02 +0000 (0:00:01.093) 0:07:08.839 ******** 2026-04-16 07:53:23.800576 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800595 | orchestrator | 2026-04-16 07:53:23.800613 | orchestrator | TASK [ceph-common : Set_fact ceph_version] 
************************************* 2026-04-16 07:53:23.800632 | orchestrator | Thursday 16 April 2026 07:53:03 +0000 (0:00:01.117) 0:07:09.957 ******** 2026-04-16 07:53:23.800650 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800668 | orchestrator | 2026-04-16 07:53:23.800687 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 07:53:23.800708 | orchestrator | Thursday 16 April 2026 07:53:04 +0000 (0:00:01.104) 0:07:11.061 ******** 2026-04-16 07:53:23.800727 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800745 | orchestrator | 2026-04-16 07:53:23.800764 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 07:53:23.800783 | orchestrator | Thursday 16 April 2026 07:53:05 +0000 (0:00:01.177) 0:07:12.239 ******** 2026-04-16 07:53:23.800801 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800820 | orchestrator | 2026-04-16 07:53:23.800839 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 07:53:23.800858 | orchestrator | Thursday 16 April 2026 07:53:06 +0000 (0:00:01.146) 0:07:13.385 ******** 2026-04-16 07:53:23.800877 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800894 | orchestrator | 2026-04-16 07:53:23.800913 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 07:53:23.800933 | orchestrator | Thursday 16 April 2026 07:53:07 +0000 (0:00:01.113) 0:07:14.499 ******** 2026-04-16 07:53:23.800953 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.800972 | orchestrator | 2026-04-16 07:53:23.800990 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 07:53:23.801009 | orchestrator | Thursday 16 April 2026 07:53:08 +0000 (0:00:01.122) 0:07:15.622 ******** 2026-04-16 07:53:23.801027 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 07:53:23.801046 | orchestrator | 2026-04-16 07:53:23.801066 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 07:53:23.801085 | orchestrator | Thursday 16 April 2026 07:53:09 +0000 (0:00:01.115) 0:07:16.737 ******** 2026-04-16 07:53:23.801104 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.801122 | orchestrator | 2026-04-16 07:53:23.801141 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 07:53:23.801160 | orchestrator | Thursday 16 April 2026 07:53:11 +0000 (0:00:01.969) 0:07:18.707 ******** 2026-04-16 07:53:23.801178 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.801197 | orchestrator | 2026-04-16 07:53:23.801216 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 07:53:23.801234 | orchestrator | Thursday 16 April 2026 07:53:14 +0000 (0:00:02.428) 0:07:21.136 ******** 2026-04-16 07:53:23.801252 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-04-16 07:53:23.801297 | orchestrator | 2026-04-16 07:53:23.801316 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 07:53:23.801333 | orchestrator | Thursday 16 April 2026 07:53:15 +0000 (0:00:01.458) 0:07:22.594 ******** 2026-04-16 07:53:23.801352 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.801370 | orchestrator | 2026-04-16 07:53:23.801388 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 07:53:23.801406 | orchestrator | Thursday 16 April 2026 07:53:16 +0000 (0:00:01.121) 0:07:23.716 ******** 2026-04-16 07:53:23.801424 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.801444 | orchestrator | 2026-04-16 07:53:23.801462 | orchestrator | TASK [ceph-container-common : Remove ceph udev 
rules] ************************** 2026-04-16 07:53:23.801480 | orchestrator | Thursday 16 April 2026 07:53:18 +0000 (0:00:01.095) 0:07:24.811 ******** 2026-04-16 07:53:23.801513 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 07:53:23.801531 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 07:53:23.801550 | orchestrator | 2026-04-16 07:53:23.801569 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 07:53:23.801586 | orchestrator | Thursday 16 April 2026 07:53:19 +0000 (0:00:01.849) 0:07:26.661 ******** 2026-04-16 07:53:23.801605 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:53:23.801624 | orchestrator | 2026-04-16 07:53:23.801643 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 07:53:23.801661 | orchestrator | Thursday 16 April 2026 07:53:21 +0000 (0:00:01.633) 0:07:28.295 ******** 2026-04-16 07:53:23.801679 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.801698 | orchestrator | 2026-04-16 07:53:23.801716 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 07:53:23.801734 | orchestrator | Thursday 16 April 2026 07:53:22 +0000 (0:00:01.121) 0:07:29.416 ******** 2026-04-16 07:53:23.801753 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:53:23.801771 | orchestrator | 2026-04-16 07:53:23.801845 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 07:53:23.801882 | orchestrator | Thursday 16 April 2026 07:53:23 +0000 (0:00:01.127) 0:07:30.544 ******** 2026-04-16 07:54:09.822156 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822271 | orchestrator | 2026-04-16 07:54:09.822288 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 
07:54:09.822301 | orchestrator | Thursday 16 April 2026 07:53:24 +0000 (0:00:01.133) 0:07:31.677 ******** 2026-04-16 07:54:09.822374 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-04-16 07:54:09.822390 | orchestrator | 2026-04-16 07:54:09.822401 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 07:54:09.822412 | orchestrator | Thursday 16 April 2026 07:53:26 +0000 (0:00:01.465) 0:07:33.143 ******** 2026-04-16 07:54:09.822423 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:54:09.822435 | orchestrator | 2026-04-16 07:54:09.822446 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 07:54:09.822457 | orchestrator | Thursday 16 April 2026 07:53:28 +0000 (0:00:01.716) 0:07:34.859 ******** 2026-04-16 07:54:09.822468 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 07:54:09.822479 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 07:54:09.822490 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 07:54:09.822501 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822512 | orchestrator | 2026-04-16 07:54:09.822522 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 07:54:09.822533 | orchestrator | Thursday 16 April 2026 07:53:29 +0000 (0:00:01.109) 0:07:35.969 ******** 2026-04-16 07:54:09.822559 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822570 | orchestrator | 2026-04-16 07:54:09.822581 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 07:54:09.822592 | orchestrator | Thursday 16 April 2026 07:53:30 +0000 (0:00:01.127) 0:07:37.097 ******** 2026-04-16 07:54:09.822603 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 07:54:09.822614 | orchestrator | 2026-04-16 07:54:09.822627 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 07:54:09.822640 | orchestrator | Thursday 16 April 2026 07:53:31 +0000 (0:00:01.146) 0:07:38.243 ******** 2026-04-16 07:54:09.822652 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822666 | orchestrator | 2026-04-16 07:54:09.822679 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 07:54:09.822691 | orchestrator | Thursday 16 April 2026 07:53:32 +0000 (0:00:01.112) 0:07:39.356 ******** 2026-04-16 07:54:09.822704 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822738 | orchestrator | 2026-04-16 07:54:09.822752 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 07:54:09.822765 | orchestrator | Thursday 16 April 2026 07:53:33 +0000 (0:00:01.110) 0:07:40.467 ******** 2026-04-16 07:54:09.822777 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822790 | orchestrator | 2026-04-16 07:54:09.822803 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 07:54:09.822815 | orchestrator | Thursday 16 April 2026 07:53:34 +0000 (0:00:01.149) 0:07:41.617 ******** 2026-04-16 07:54:09.822827 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:54:09.822840 | orchestrator | 2026-04-16 07:54:09.822853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 07:54:09.822865 | orchestrator | Thursday 16 April 2026 07:53:37 +0000 (0:00:02.660) 0:07:44.278 ******** 2026-04-16 07:54:09.822877 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:54:09.822889 | orchestrator | 2026-04-16 07:54:09.822901 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 07:54:09.822914 | 
orchestrator | Thursday 16 April 2026 07:53:38 +0000 (0:00:01.140) 0:07:45.419 ******** 2026-04-16 07:54:09.822926 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-04-16 07:54:09.822938 | orchestrator | 2026-04-16 07:54:09.822951 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 07:54:09.822963 | orchestrator | Thursday 16 April 2026 07:53:40 +0000 (0:00:01.433) 0:07:46.852 ******** 2026-04-16 07:54:09.822977 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.822987 | orchestrator | 2026-04-16 07:54:09.822998 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 07:54:09.823009 | orchestrator | Thursday 16 April 2026 07:53:41 +0000 (0:00:01.116) 0:07:47.969 ******** 2026-04-16 07:54:09.823020 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823030 | orchestrator | 2026-04-16 07:54:09.823041 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 07:54:09.823052 | orchestrator | Thursday 16 April 2026 07:53:42 +0000 (0:00:01.152) 0:07:49.121 ******** 2026-04-16 07:54:09.823062 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823073 | orchestrator | 2026-04-16 07:54:09.823084 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 07:54:09.823095 | orchestrator | Thursday 16 April 2026 07:53:43 +0000 (0:00:01.135) 0:07:50.257 ******** 2026-04-16 07:54:09.823105 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823116 | orchestrator | 2026-04-16 07:54:09.823127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 07:54:09.823138 | orchestrator | Thursday 16 April 2026 07:53:44 +0000 (0:00:01.111) 0:07:51.368 ******** 2026-04-16 07:54:09.823148 | orchestrator | skipping: 
[testbed-node-0] 2026-04-16 07:54:09.823159 | orchestrator | 2026-04-16 07:54:09.823170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 07:54:09.823181 | orchestrator | Thursday 16 April 2026 07:53:45 +0000 (0:00:01.120) 0:07:52.489 ******** 2026-04-16 07:54:09.823192 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823202 | orchestrator | 2026-04-16 07:54:09.823213 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 07:54:09.823224 | orchestrator | Thursday 16 April 2026 07:53:46 +0000 (0:00:01.113) 0:07:53.603 ******** 2026-04-16 07:54:09.823235 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823246 | orchestrator | 2026-04-16 07:54:09.823275 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 07:54:09.823286 | orchestrator | Thursday 16 April 2026 07:53:47 +0000 (0:00:01.110) 0:07:54.713 ******** 2026-04-16 07:54:09.823297 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823308 | orchestrator | 2026-04-16 07:54:09.823339 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 07:54:09.823350 | orchestrator | Thursday 16 April 2026 07:53:49 +0000 (0:00:01.107) 0:07:55.821 ******** 2026-04-16 07:54:09.823370 | orchestrator | ok: [testbed-node-0] 2026-04-16 07:54:09.823381 | orchestrator | 2026-04-16 07:54:09.823392 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 07:54:09.823403 | orchestrator | Thursday 16 April 2026 07:53:50 +0000 (0:00:01.108) 0:07:56.929 ******** 2026-04-16 07:54:09.823413 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-04-16 07:54:09.823425 | orchestrator | 2026-04-16 07:54:09.823436 | orchestrator | TASK [ceph-config : Create ceph initial 
directories] *************************** 2026-04-16 07:54:09.823446 | orchestrator | Thursday 16 April 2026 07:53:51 +0000 (0:00:01.429) 0:07:58.359 ******** 2026-04-16 07:54:09.823457 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-04-16 07:54:09.823469 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-04-16 07:54:09.823480 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-04-16 07:54:09.823491 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-04-16 07:54:09.823502 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-04-16 07:54:09.823512 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-04-16 07:54:09.823528 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-04-16 07:54:09.823539 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-04-16 07:54:09.823550 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 07:54:09.823561 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 07:54:09.823572 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 07:54:09.823583 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 07:54:09.823594 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 07:54:09.823605 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 07:54:09.823615 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-04-16 07:54:09.823627 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-04-16 07:54:09.823637 | orchestrator | 2026-04-16 07:54:09.823648 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 07:54:09.823659 | orchestrator | Thursday 16 April 2026 07:53:58 +0000 (0:00:07.131) 0:08:05.491 ******** 2026-04-16 
07:54:09.823670 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823681 | orchestrator | 2026-04-16 07:54:09.823691 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 07:54:09.823702 | orchestrator | Thursday 16 April 2026 07:53:59 +0000 (0:00:01.094) 0:08:06.586 ******** 2026-04-16 07:54:09.823713 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823724 | orchestrator | 2026-04-16 07:54:09.823734 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 07:54:09.823745 | orchestrator | Thursday 16 April 2026 07:54:00 +0000 (0:00:01.105) 0:08:07.691 ******** 2026-04-16 07:54:09.823756 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823770 | orchestrator | 2026-04-16 07:54:09.823788 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 07:54:09.823807 | orchestrator | Thursday 16 April 2026 07:54:02 +0000 (0:00:01.103) 0:08:08.795 ******** 2026-04-16 07:54:09.823831 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823857 | orchestrator | 2026-04-16 07:54:09.823875 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 07:54:09.823892 | orchestrator | Thursday 16 April 2026 07:54:03 +0000 (0:00:01.138) 0:08:09.934 ******** 2026-04-16 07:54:09.823910 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.823927 | orchestrator | 2026-04-16 07:54:09.823943 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 07:54:09.823961 | orchestrator | Thursday 16 April 2026 07:54:04 +0000 (0:00:01.124) 0:08:11.058 ******** 2026-04-16 07:54:09.823980 | orchestrator | skipping: [testbed-node-0] 2026-04-16 07:54:09.824012 | orchestrator | 2026-04-16 07:54:09.824031 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see 
how many osds are to be created] ***
2026-04-16 07:54:09.824049 | orchestrator | Thursday 16 April 2026 07:54:05 +0000 (0:00:01.149) 0:08:12.208 ********
2026-04-16 07:54:09.824067 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:54:09.824082 | orchestrator |
2026-04-16 07:54:09.824093 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 07:54:09.824104 | orchestrator | Thursday 16 April 2026 07:54:06 +0000 (0:00:01.119) 0:08:13.327 ********
2026-04-16 07:54:09.824115 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:54:09.824126 | orchestrator |
2026-04-16 07:54:09.824137 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 07:54:09.824148 | orchestrator | Thursday 16 April 2026 07:54:07 +0000 (0:00:01.068) 0:08:14.396 ********
2026-04-16 07:54:09.824158 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:54:09.824169 | orchestrator |
2026-04-16 07:54:09.824188 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 07:54:09.824215 | orchestrator | Thursday 16 April 2026 07:54:08 +0000 (0:00:01.003) 0:08:15.399 ********
2026-04-16 07:54:09.824235 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:54:09.824253 | orchestrator |
2026-04-16 07:54:09.824270 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 07:54:09.824288 | orchestrator | Thursday 16 April 2026 07:54:09 +0000 (0:00:01.036) 0:08:16.436 ********
2026-04-16 07:54:09.824304 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:54:09.824363 | orchestrator |
2026-04-16 07:54:09.824396 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 07:55:04.700925 | orchestrator | Thursday 16 April 2026 07:54:10 +0000 (0:00:01.103) 0:08:17.540 ********
2026-04-16 07:55:04.701045 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701061 | orchestrator |
2026-04-16 07:55:04.701074 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 07:55:04.701086 | orchestrator | Thursday 16 April 2026 07:54:11 +0000 (0:00:01.119) 0:08:18.659 ********
2026-04-16 07:55:04.701097 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701109 | orchestrator |
2026-04-16 07:55:04.701120 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 07:55:04.701131 | orchestrator | Thursday 16 April 2026 07:54:13 +0000 (0:00:01.213) 0:08:19.873 ********
2026-04-16 07:55:04.701142 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701153 | orchestrator |
2026-04-16 07:55:04.701164 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 07:55:04.701175 | orchestrator | Thursday 16 April 2026 07:54:14 +0000 (0:00:01.132) 0:08:21.005 ********
2026-04-16 07:55:04.701186 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701197 | orchestrator |
2026-04-16 07:55:04.701208 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 07:55:04.701219 | orchestrator | Thursday 16 April 2026 07:54:15 +0000 (0:00:01.201) 0:08:22.207 ********
2026-04-16 07:55:04.701230 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701241 | orchestrator |
2026-04-16 07:55:04.701253 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 07:55:04.701264 | orchestrator | Thursday 16 April 2026 07:54:16 +0000 (0:00:01.106) 0:08:23.313 ********
2026-04-16 07:55:04.701289 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701301 | orchestrator |
2026-04-16 07:55:04.701312 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 07:55:04.701324 | orchestrator | Thursday 16 April 2026 07:54:17 +0000 (0:00:01.099) 0:08:24.412 ********
2026-04-16 07:55:04.701335 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701346 | orchestrator |
2026-04-16 07:55:04.701357 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 07:55:04.701419 | orchestrator | Thursday 16 April 2026 07:54:18 +0000 (0:00:01.163) 0:08:25.576 ********
2026-04-16 07:55:04.701433 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701444 | orchestrator |
2026-04-16 07:55:04.701456 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 07:55:04.701470 | orchestrator | Thursday 16 April 2026 07:54:19 +0000 (0:00:01.121) 0:08:26.697 ********
2026-04-16 07:55:04.701483 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701495 | orchestrator |
2026-04-16 07:55:04.701508 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 07:55:04.701520 | orchestrator | Thursday 16 April 2026 07:54:21 +0000 (0:00:01.130) 0:08:27.828 ********
2026-04-16 07:55:04.701532 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701545 | orchestrator |
2026-04-16 07:55:04.701558 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 07:55:04.701571 | orchestrator | Thursday 16 April 2026 07:54:22 +0000 (0:00:01.124) 0:08:28.952 ********
2026-04-16 07:55:04.701584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-16 07:55:04.701597 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-16 07:55:04.701610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-16 07:55:04.701622 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701635 | orchestrator |
2026-04-16 07:55:04.701648 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 07:55:04.701660 | orchestrator | Thursday 16 April 2026 07:54:23 +0000 (0:00:01.677) 0:08:30.630 ********
2026-04-16 07:55:04.701678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-16 07:55:04.701699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-16 07:55:04.701718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-16 07:55:04.701864 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.701894 | orchestrator |
2026-04-16 07:55:04.701913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 07:55:04.701931 | orchestrator | Thursday 16 April 2026 07:54:25 +0000 (0:00:01.443) 0:08:32.074 ********
2026-04-16 07:55:04.701948 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-04-16 07:55:04.701966 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-04-16 07:55:04.701983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-04-16 07:55:04.702001 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.702084 | orchestrator |
2026-04-16 07:55:04.702098 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 07:55:04.702109 | orchestrator | Thursday 16 April 2026 07:54:26 +0000 (0:00:01.385) 0:08:33.459 ********
2026-04-16 07:55:04.702120 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.702130 | orchestrator |
2026-04-16 07:55:04.702141 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 07:55:04.702152 | orchestrator | Thursday 16 April 2026 07:54:27 +0000 (0:00:01.107) 0:08:34.567 ********
2026-04-16 07:55:04.702165 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-04-16 07:55:04.702175 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.702186 | orchestrator |
2026-04-16 07:55:04.702197 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 07:55:04.702208 | orchestrator | Thursday 16 April 2026 07:54:29 +0000 (0:00:01.364) 0:08:35.931 ********
2026-04-16 07:55:04.702219 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702230 | orchestrator |
2026-04-16 07:55:04.702240 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-16 07:55:04.702251 | orchestrator | Thursday 16 April 2026 07:54:30 +0000 (0:00:01.725) 0:08:37.657 ********
2026-04-16 07:55:04.702262 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702273 | orchestrator |
2026-04-16 07:55:04.702284 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-16 07:55:04.702331 | orchestrator | Thursday 16 April 2026 07:54:32 +0000 (0:00:01.123) 0:08:38.781 ********
2026-04-16 07:55:04.702343 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-04-16 07:55:04.702355 | orchestrator |
2026-04-16 07:55:04.702366 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-16 07:55:04.702433 | orchestrator | Thursday 16 April 2026 07:54:33 +0000 (0:00:03.553) 0:08:40.277 ********
2026-04-16 07:55:04.702446 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-04-16 07:55:04.702457 | orchestrator |
2026-04-16 07:55:04.702468 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-16 07:55:04.702478 | orchestrator | Thursday 16 April 2026 07:54:37 +0000 (0:00:03.553) 0:08:43.831 ********
2026-04-16 07:55:04.702489 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.702500 | orchestrator |
2026-04-16 07:55:04.702511 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-16 07:55:04.702522 | orchestrator | Thursday 16 April 2026 07:54:38 +0000 (0:00:01.133) 0:08:44.964 ********
2026-04-16 07:55:04.702533 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702544 | orchestrator |
2026-04-16 07:55:04.702555 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-16 07:55:04.702566 | orchestrator | Thursday 16 April 2026 07:54:39 +0000 (0:00:01.136) 0:08:46.101 ********
2026-04-16 07:55:04.702576 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702587 | orchestrator |
2026-04-16 07:55:04.702599 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-16 07:55:04.702619 | orchestrator | Thursday 16 April 2026 07:54:40 +0000 (0:00:01.192) 0:08:47.294 ********
2026-04-16 07:55:04.702630 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:55:04.702641 | orchestrator |
2026-04-16 07:55:04.702652 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-16 07:55:04.702663 | orchestrator | Thursday 16 April 2026 07:54:42 +0000 (0:00:02.117) 0:08:49.411 ********
2026-04-16 07:55:04.702674 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702685 | orchestrator |
2026-04-16 07:55:04.702696 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-16 07:55:04.702707 | orchestrator | Thursday 16 April 2026 07:54:44 +0000 (0:00:01.536) 0:08:50.947 ********
2026-04-16 07:55:04.702718 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702729 | orchestrator |
2026-04-16 07:55:04.702740 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-16 07:55:04.702751 | orchestrator | Thursday 16 April 2026 07:54:45 +0000 (0:00:01.498) 0:08:52.446 ********
2026-04-16 07:55:04.702762 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702773 | orchestrator |
2026-04-16 07:55:04.702784 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-16 07:55:04.702794 | orchestrator | Thursday 16 April 2026 07:54:47 +0000 (0:00:01.504) 0:08:53.951 ********
2026-04-16 07:55:04.702805 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702816 | orchestrator |
2026-04-16 07:55:04.702827 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-16 07:55:04.702838 | orchestrator | Thursday 16 April 2026 07:54:48 +0000 (0:00:01.673) 0:08:55.624 ********
2026-04-16 07:55:04.702849 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.702860 | orchestrator |
2026-04-16 07:55:04.702871 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-16 07:55:04.702882 | orchestrator | Thursday 16 April 2026 07:54:50 +0000 (0:00:01.670) 0:08:57.295 ********
2026-04-16 07:55:04.702893 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 07:55:04.702904 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-16 07:55:04.702915 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 07:55:04.702926 | orchestrator | ok: [testbed-node-0 -> {{ item }}]
2026-04-16 07:55:04.702937 | orchestrator |
2026-04-16 07:55:04.702947 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-16 07:55:04.702965 | orchestrator | Thursday 16 April 2026 07:54:54 +0000 (0:00:03.769) 0:09:01.064 ********
2026-04-16 07:55:04.702976 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:55:04.702987 | orchestrator |
2026-04-16 07:55:04.702998 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-16 07:55:04.703009 | orchestrator | Thursday 16 April 2026 07:54:56 +0000 (0:00:02.031) 0:09:03.096 ********
2026-04-16 07:55:04.703020 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.703031 | orchestrator |
2026-04-16 07:55:04.703042 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-16 07:55:04.703053 | orchestrator | Thursday 16 April 2026 07:54:57 +0000 (0:00:01.111) 0:09:04.208 ********
2026-04-16 07:55:04.703064 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.703075 | orchestrator |
2026-04-16 07:55:04.703086 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-16 07:55:04.703097 | orchestrator | Thursday 16 April 2026 07:54:58 +0000 (0:00:01.154) 0:09:05.363 ********
2026-04-16 07:55:04.703108 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.703119 | orchestrator |
2026-04-16 07:55:04.703130 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-16 07:55:04.703141 | orchestrator | Thursday 16 April 2026 07:55:00 +0000 (0:00:02.019) 0:09:07.382 ********
2026-04-16 07:55:04.703152 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:04.703163 | orchestrator |
2026-04-16 07:55:04.703174 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-16 07:55:04.703185 | orchestrator | Thursday 16 April 2026 07:55:02 +0000 (0:00:01.506) 0:09:08.889 ********
2026-04-16 07:55:04.703195 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:04.703206 | orchestrator |
2026-04-16 07:55:04.703217 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-16 07:55:04.703228 | orchestrator | Thursday 16 April 2026 07:55:03 +0000 (0:00:01.096) 0:09:09.985 ********
2026-04-16 07:55:04.703239 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0
2026-04-16 07:55:04.703250 | orchestrator |
2026-04-16 07:55:04.703261 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2026-04-16 07:55:04.703278 | orchestrator | Thursday 16 April 2026 07:55:04 +0000 (0:00:01.460) 0:09:11.446 ********
2026-04-16 07:55:56.749153 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:56.749280 | orchestrator |
2026-04-16 07:55:56.749295 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2026-04-16 07:55:56.749306 | orchestrator | Thursday 16 April 2026 07:55:05 +0000 (0:00:01.096) 0:09:12.543 ********
2026-04-16 07:55:56.749315 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:56.749324 | orchestrator |
2026-04-16 07:55:56.749332 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2026-04-16 07:55:56.749341 | orchestrator | Thursday 16 April 2026 07:55:06 +0000 (0:00:01.068) 0:09:13.611 ********
2026-04-16 07:55:56.749349 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0
2026-04-16 07:55:56.749357 | orchestrator |
2026-04-16 07:55:56.749365 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2026-04-16 07:55:56.749374 | orchestrator | Thursday 16 April 2026 07:55:08 +0000 (0:00:01.432) 0:09:15.044 ********
2026-04-16 07:55:56.749382 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.749391 | orchestrator |
2026-04-16 07:55:56.749399 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-04-16 07:55:56.749407 | orchestrator | Thursday 16 April 2026 07:55:10 +0000 (0:00:02.229) 0:09:17.273 ********
2026-04-16 07:55:56.749415 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.749423 | orchestrator |
2026-04-16 07:55:56.749466 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-04-16 07:55:56.749488 | orchestrator | Thursday 16 April 2026 07:55:12 +0000 (0:00:01.957) 0:09:19.230 ********
2026-04-16 07:55:56.749497 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.749506 | orchestrator |
2026-04-16 07:55:56.749532 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-04-16 07:55:56.749541 | orchestrator | Thursday 16 April 2026 07:55:14 +0000 (0:00:02.464) 0:09:21.695 ********
2026-04-16 07:55:56.749550 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:55:56.749558 | orchestrator |
2026-04-16 07:55:56.749567 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-04-16 07:55:56.749576 | orchestrator | Thursday 16 April 2026 07:55:18 +0000 (0:00:03.285) 0:09:24.981 ********
2026-04-16 07:55:56.749584 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0
2026-04-16 07:55:56.749594 | orchestrator |
2026-04-16 07:55:56.749603 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-04-16 07:55:56.749611 | orchestrator | Thursday 16 April 2026 07:55:19 +0000 (0:00:01.509) 0:09:26.491 ********
2026-04-16 07:55:56.749620 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.749628 | orchestrator |
2026-04-16 07:55:56.749636 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-04-16 07:55:56.749645 | orchestrator | Thursday 16 April 2026 07:55:21 +0000 (0:00:02.214) 0:09:28.705 ********
2026-04-16 07:55:56.749653 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.749662 | orchestrator |
2026-04-16 07:55:56.749670 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-04-16 07:55:56.749679 | orchestrator | Thursday 16 April 2026 07:55:25 +0000 (0:00:03.109) 0:09:31.815 ********
2026-04-16 07:55:56.749688 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:56.749696 | orchestrator |
2026-04-16 07:55:56.749705 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-04-16 07:55:56.749715 | orchestrator | Thursday 16 April 2026 07:55:26 +0000 (0:00:01.147) 0:09:32.963 ********
2026-04-16 07:55:56.749728 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-04-16 07:55:56.749740 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-04-16 07:55:56.749751 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-04-16 07:55:56.749760 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-04-16 07:55:56.749788 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-04-16 07:55:56.749800 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}])
2026-04-16 07:55:56.749820 | orchestrator |
2026-04-16 07:55:56.749830 | orchestrator | TASK [Start ceph mgr] **********************************************************
2026-04-16 07:55:56.749840 | orchestrator | Thursday 16 April 2026 07:55:36 +0000 (0:00:10.266) 0:09:43.230 ********
2026-04-16 07:55:56.749850 | orchestrator | changed: [testbed-node-0]
2026-04-16 07:55:56.749859 | orchestrator |
2026-04-16 07:55:56.749869 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 07:55:56.749879 | orchestrator | Thursday 16 April 2026 07:55:39 +0000 (0:00:02.572) 0:09:45.802 ********
2026-04-16 07:55:56.749889 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:55:56.749899 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:55:56.749913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:55:56.749923 | orchestrator |
2026-04-16 07:55:56.749933 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 07:55:56.749943 | orchestrator | Thursday 16 April 2026 07:55:41 +0000 (0:00:02.083) 0:09:47.885 ********
2026-04-16 07:55:56.749953 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 07:55:56.749963 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 07:55:56.749971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 07:55:56.749979 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:56.749988 | orchestrator |
2026-04-16 07:55:56.749996 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] ***********
2026-04-16 07:55:56.750005 | orchestrator | Thursday 16 April 2026 07:55:42 +0000 (0:00:01.355) 0:09:49.241 ********
2026-04-16 07:55:56.750062 | orchestrator | skipping: [testbed-node-0]
2026-04-16 07:55:56.750072 | orchestrator |
2026-04-16 07:55:56.750081 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] ***
2026-04-16 07:55:56.750089 | orchestrator | Thursday 16 April 2026 07:55:43 +0000 (0:00:01.158) 0:09:50.400 ********
2026-04-16 07:55:56.750098 | orchestrator | ok: [testbed-node-0]
2026-04-16 07:55:56.750106 | orchestrator |
2026-04-16 07:55:56.750115 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-04-16 07:55:56.750124 | orchestrator |
2026-04-16 07:55:56.750132 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-04-16 07:55:56.750140 | orchestrator | Thursday 16 April 2026 07:55:45 +0000 (0:00:02.183) 0:09:52.583 ********
2026-04-16 07:55:56.750149 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750157 | orchestrator |
2026-04-16 07:55:56.750166 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-04-16 07:55:56.750174 | orchestrator | Thursday 16 April 2026 07:55:46 +0000 (0:00:01.109) 0:09:53.693 ********
2026-04-16 07:55:56.750182 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750191 | orchestrator |
2026-04-16 07:55:56.750199 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-04-16 07:55:56.750207 | orchestrator | Thursday 16 April 2026 07:55:47 +0000 (0:00:00.778) 0:09:54.471 ********
2026-04-16 07:55:56.750216 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:55:56.750224 | orchestrator |
2026-04-16 07:55:56.750233 | orchestrator | TASK [Select a running monitor] ************************************************
2026-04-16 07:55:56.750241 | orchestrator | Thursday 16 April 2026 07:55:48 +0000 (0:00:00.763) 0:09:55.235 ********
2026-04-16 07:55:56.750250 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750258 | orchestrator |
2026-04-16 07:55:56.750266 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 07:55:56.750275 | orchestrator | Thursday 16 April 2026 07:55:49 +0000 (0:00:00.778) 0:09:56.014 ********
2026-04-16 07:55:56.750283 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-16 07:55:56.750292 | orchestrator |
2026-04-16 07:55:56.750300 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 07:55:56.750316 | orchestrator | Thursday 16 April 2026 07:55:50 +0000 (0:00:01.120) 0:09:57.134 ********
2026-04-16 07:55:56.750325 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750333 | orchestrator |
2026-04-16 07:55:56.750342 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 07:55:56.750350 | orchestrator | Thursday 16 April 2026 07:55:51 +0000 (0:00:01.444) 0:09:58.578 ********
2026-04-16 07:55:56.750358 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750367 | orchestrator |
2026-04-16 07:55:56.750375 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 07:55:56.750384 | orchestrator | Thursday 16 April 2026 07:55:52 +0000 (0:00:01.095) 0:09:59.674 ********
2026-04-16 07:55:56.750392 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750401 | orchestrator |
2026-04-16 07:55:56.750409 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 07:55:56.750418 | orchestrator | Thursday 16 April 2026 07:55:54 +0000 (0:00:01.464) 0:10:01.139 ********
2026-04-16 07:55:56.750426 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750453 | orchestrator |
2026-04-16 07:55:56.750466 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 07:55:56.750478 | orchestrator | Thursday 16 April 2026 07:55:55 +0000 (0:00:01.137) 0:10:02.277 ********
2026-04-16 07:55:56.750491 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:55:56.750504 | orchestrator |
2026-04-16 07:55:56.750516 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 07:55:56.750529 | orchestrator | Thursday 16 April 2026 07:55:56 +0000 (0:00:01.144) 0:10:03.421 ********
2026-04-16 07:55:56.750551 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.070389 | orchestrator |
2026-04-16 07:56:20.070551 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 07:56:20.070570 | orchestrator | Thursday 16 April 2026 07:55:57 +0000 (0:00:01.115) 0:10:04.537 ********
2026-04-16 07:56:20.070584 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.070595 | orchestrator |
2026-04-16 07:56:20.070607 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 07:56:20.070618 | orchestrator | Thursday 16 April 2026 07:55:58 +0000 (0:00:01.116) 0:10:05.654 ********
2026-04-16 07:56:20.070629 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.070641 | orchestrator |
2026-04-16 07:56:20.070652 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 07:56:20.070663 | orchestrator | Thursday 16 April 2026 07:56:00 +0000 (0:00:01.104) 0:10:06.758 ********
2026-04-16 07:56:20.070673 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 07:56:20.070684 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 07:56:20.070695 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:56:20.070706 | orchestrator |
2026-04-16 07:56:20.070717 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 07:56:20.070728 | orchestrator | Thursday 16 April 2026 07:56:01 +0000 (0:00:01.632) 0:10:08.391 ********
2026-04-16 07:56:20.070739 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.070750 | orchestrator |
2026-04-16 07:56:20.070776 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 07:56:20.070787 | orchestrator | Thursday 16 April 2026 07:56:02 +0000 (0:00:01.218) 0:10:09.610 ********
2026-04-16 07:56:20.070798 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 07:56:20.070809 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 07:56:20.070820 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 07:56:20.070831 | orchestrator |
2026-04-16 07:56:20.070842 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 07:56:20.070853 | orchestrator | Thursday 16 April 2026 07:56:05 +0000 (0:00:02.887) 0:10:12.497 ********
2026-04-16 07:56:20.070885 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 07:56:20.070898 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 07:56:20.070908 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 07:56:20.070919 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.070932 | orchestrator |
2026-04-16 07:56:20.070945 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 07:56:20.070957 | orchestrator | Thursday 16 April 2026 07:56:07 +0000 (0:00:01.393) 0:10:13.891 ********
2026-04-16 07:56:20.070971 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.070987 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071012 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.071025 | orchestrator |
2026-04-16 07:56:20.071039 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 07:56:20.071051 | orchestrator | Thursday 16 April 2026 07:56:08 +0000 (0:00:01.574) 0:10:15.465 ********
2026-04-16 07:56:20.071067 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071083 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071111 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071124 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.071135 | orchestrator |
2026-04-16 07:56:20.071146 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 07:56:20.071156 | orchestrator | Thursday 16 April 2026 07:56:09 +0000 (0:00:01.144) 0:10:16.609 ********
2026-04-16 07:56:20.071170 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 07:56:03.461171', 'end': '2026-04-16 07:56:03.505256', 'delta': '0:00:00.044085', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071197 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'deb83ba22d33', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 07:56:04.024626', 'end': '2026-04-16 07:56:04.061063', 'delta': '0:00:00.036437', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['deb83ba22d33'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071209 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '8eb997055eb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 07:56:04.552738', 'end': '2026-04-16 07:56:04.603679', 'delta': '0:00:00.050941', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8eb997055eb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 07:56:20.071221 | orchestrator |
2026-04-16 07:56:20.071232 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-16 07:56:20.071243 | orchestrator | Thursday 16 April 2026 07:56:11 +0000 (0:00:01.184) 0:10:17.794 ********
2026-04-16 07:56:20.071254 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.071265 | orchestrator |
2026-04-16 07:56:20.071276 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 07:56:20.071286 | orchestrator | Thursday 16 April 2026 07:56:12 +0000 (0:00:01.219) 0:10:19.022 ********
2026-04-16 07:56:20.071297 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.071308 | orchestrator |
2026-04-16 07:56:20.071319 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 07:56:20.071329 | orchestrator | Thursday 16 April 2026 07:56:13 +0000 (0:00:01.131) 0:10:20.241 ********
2026-04-16 07:56:20.071340 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.071350 | orchestrator |
2026-04-16 07:56:20.071361 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 07:56:20.071372 | orchestrator | Thursday 16 April 2026 07:56:14 +0000 (0:00:01.131) 0:10:21.373 ********
2026-04-16 07:56:20.071383 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-04-16 07:56:20.071394 | orchestrator |
2026-04-16 07:56:20.071404 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:56:20.071415 | orchestrator | Thursday 16 April 2026 07:56:16 +0000 (0:00:01.938) 0:10:23.312 ********
2026-04-16 07:56:20.071426 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:56:20.071437 | orchestrator |
2026-04-16 07:56:20.071447 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 07:56:20.071479 | orchestrator | Thursday 16 April 2026 07:56:17 +0000 (0:00:01.148) 0:10:24.461 ********
2026-04-16 07:56:20.071490 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.071501 | orchestrator |
2026-04-16 07:56:20.071512 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 07:56:20.071523 | orchestrator | Thursday 16 April 2026 07:56:18 +0000 (0:00:01.173) 0:10:25.634 ********
2026-04-16 07:56:20.071533 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:20.071544 | orchestrator |
2026-04-16 07:56:20.071555 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 07:56:20.071579 | orchestrator | Thursday 16 April 2026 07:56:20 +0000 (0:00:01.183) 0:10:26.817 ********
2026-04-16 07:56:29.263830 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:56:29.263944 | orchestrator |
2026-04-16 07:56:29.263962 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 07:56:29.263975 | orchestrator | Thursday 16 April 2026 07:56:21 +0000 (0:00:01.110) 0:10:27.927 ********
2026-04-16 07:56:29.263986 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.263997 | orchestrator | 2026-04-16 07:56:29.264008 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 07:56:29.264019 | orchestrator | Thursday 16 April 2026 07:56:22 +0000 (0:00:01.113) 0:10:29.041 ******** 2026-04-16 07:56:29.264030 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264041 | orchestrator | 2026-04-16 07:56:29.264052 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 07:56:29.264062 | orchestrator | Thursday 16 April 2026 07:56:23 +0000 (0:00:01.124) 0:10:30.166 ******** 2026-04-16 07:56:29.264073 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264084 | orchestrator | 2026-04-16 07:56:29.264095 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 07:56:29.264106 | orchestrator | Thursday 16 April 2026 07:56:24 +0000 (0:00:01.140) 0:10:31.306 ******** 2026-04-16 07:56:29.264116 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264127 | orchestrator | 2026-04-16 07:56:29.264138 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 07:56:29.264163 | orchestrator | Thursday 16 April 2026 07:56:25 +0000 (0:00:01.147) 0:10:32.454 ******** 2026-04-16 07:56:29.264174 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264185 | orchestrator | 2026-04-16 07:56:29.264196 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 07:56:29.264208 | orchestrator | Thursday 16 April 2026 07:56:26 +0000 (0:00:01.132) 0:10:33.586 ******** 2026-04-16 07:56:29.264218 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264229 | orchestrator | 2026-04-16 07:56:29.264241 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-16 07:56:29.264252 | orchestrator | Thursday 16 April 2026 07:56:27 +0000 (0:00:01.112) 0:10:34.699 ******** 2026-04-16 07:56:29.264265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 07:56:29.264340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 07:56:29.264427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 07:56:29.264491 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:29.264505 | orchestrator | 2026-04-16 07:56:29.264519 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 07:56:29.264532 | orchestrator | Thursday 16 April 2026 07:56:29 +0000 (0:00:01.254) 0:10:35.953 ******** 2026-04-16 07:56:29.264553 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505262 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505365 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505376 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505384 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505407 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505414 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505442 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505450 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505461 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 07:56:34.505797 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:56:34.505809 | orchestrator | 2026-04-16 07:56:34.505817 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 07:56:34.505825 | 
orchestrator | Thursday 16 April 2026 07:56:30 +0000 (0:00:01.253) 0:10:37.207 ******** 2026-04-16 07:56:34.505832 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:56:34.505839 | orchestrator | 2026-04-16 07:56:34.505847 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 07:56:34.505854 | orchestrator | Thursday 16 April 2026 07:56:31 +0000 (0:00:01.479) 0:10:38.687 ******** 2026-04-16 07:56:34.505860 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:56:34.505867 | orchestrator | 2026-04-16 07:56:34.505874 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 07:56:34.505880 | orchestrator | Thursday 16 April 2026 07:56:33 +0000 (0:00:01.105) 0:10:39.793 ******** 2026-04-16 07:56:34.505886 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:56:34.505893 | orchestrator | 2026-04-16 07:56:34.505900 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 07:56:34.505914 | orchestrator | Thursday 16 April 2026 07:56:34 +0000 (0:00:01.464) 0:10:41.257 ******** 2026-04-16 07:57:14.437765 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.437875 | orchestrator | 2026-04-16 07:57:14.437891 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 07:57:14.437903 | orchestrator | Thursday 16 April 2026 07:56:35 +0000 (0:00:01.121) 0:10:42.379 ******** 2026-04-16 07:57:14.437913 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.437923 | orchestrator | 2026-04-16 07:57:14.437934 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 07:57:14.437944 | orchestrator | Thursday 16 April 2026 07:56:36 +0000 (0:00:01.234) 0:10:43.613 ******** 2026-04-16 07:57:14.437953 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.437963 | orchestrator | 2026-04-16 07:57:14.437974 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 07:57:14.437984 | orchestrator | Thursday 16 April 2026 07:56:37 +0000 (0:00:01.115) 0:10:44.729 ******** 2026-04-16 07:57:14.437994 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-04-16 07:57:14.438005 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 07:57:14.438014 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-04-16 07:57:14.438078 | orchestrator | 2026-04-16 07:57:14.438102 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 07:57:14.438113 | orchestrator | Thursday 16 April 2026 07:56:39 +0000 (0:00:01.678) 0:10:46.407 ******** 2026-04-16 07:57:14.438123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 07:57:14.438133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 07:57:14.438143 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 07:57:14.438153 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438163 | orchestrator | 2026-04-16 07:57:14.438193 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 07:57:14.438203 | orchestrator | Thursday 16 April 2026 07:56:40 +0000 (0:00:01.209) 0:10:47.617 ******** 2026-04-16 07:57:14.438213 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438223 | orchestrator | 2026-04-16 07:57:14.438232 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 07:57:14.438242 | orchestrator | Thursday 16 April 2026 07:56:41 +0000 (0:00:01.096) 0:10:48.714 ******** 2026-04-16 07:57:14.438252 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 07:57:14.438262 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 
07:57:14.438272 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:57:14.438282 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 07:57:14.438292 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 07:57:14.438301 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 07:57:14.438311 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 07:57:14.438321 | orchestrator | 2026-04-16 07:57:14.438331 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 07:57:14.438340 | orchestrator | Thursday 16 April 2026 07:56:44 +0000 (0:00:02.102) 0:10:50.816 ******** 2026-04-16 07:57:14.438350 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 07:57:14.438360 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 07:57:14.438369 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 07:57:14.438380 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 07:57:14.438390 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 07:57:14.438399 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 07:57:14.438409 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 07:57:14.438419 | orchestrator | 2026-04-16 07:57:14.438428 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-16 07:57:14.438438 | orchestrator | Thursday 16 April 2026 07:56:46 +0000 (0:00:02.149) 0:10:52.966 
******** 2026-04-16 07:57:14.438447 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438457 | orchestrator | 2026-04-16 07:57:14.438466 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-16 07:57:14.438476 | orchestrator | Thursday 16 April 2026 07:56:47 +0000 (0:00:00.849) 0:10:53.816 ******** 2026-04-16 07:57:14.438486 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438495 | orchestrator | 2026-04-16 07:57:14.438527 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-16 07:57:14.438537 | orchestrator | Thursday 16 April 2026 07:56:47 +0000 (0:00:00.838) 0:10:54.654 ******** 2026-04-16 07:57:14.438547 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438557 | orchestrator | 2026-04-16 07:57:14.438566 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-16 07:57:14.438576 | orchestrator | Thursday 16 April 2026 07:56:48 +0000 (0:00:00.764) 0:10:55.419 ******** 2026-04-16 07:57:14.438586 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438595 | orchestrator | 2026-04-16 07:57:14.438605 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-16 07:57:14.438615 | orchestrator | Thursday 16 April 2026 07:56:49 +0000 (0:00:01.151) 0:10:56.570 ******** 2026-04-16 07:57:14.438624 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438634 | orchestrator | 2026-04-16 07:57:14.438644 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-16 07:57:14.438661 | orchestrator | Thursday 16 April 2026 07:56:50 +0000 (0:00:00.785) 0:10:57.356 ******** 2026-04-16 07:57:14.438689 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 07:57:14.438700 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 
07:57:14.438710 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 07:57:14.438719 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438729 | orchestrator | 2026-04-16 07:57:14.438739 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-16 07:57:14.438748 | orchestrator | Thursday 16 April 2026 07:56:51 +0000 (0:00:01.018) 0:10:58.375 ******** 2026-04-16 07:57:14.438758 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-16 07:57:14.438768 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-16 07:57:14.438777 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-16 07:57:14.438787 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-16 07:57:14.438797 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-16 07:57:14.438811 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-16 07:57:14.438821 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:57:14.438831 | orchestrator | 2026-04-16 07:57:14.438840 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-16 07:57:14.438850 | orchestrator | Thursday 16 April 2026 07:56:52 +0000 (0:00:01.275) 0:10:59.650 ******** 2026-04-16 07:57:14.438860 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 07:57:14.438869 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 07:57:14.438879 | orchestrator | 2026-04-16 07:57:14.438889 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-16 07:57:14.438898 | orchestrator | Thursday 16 April 2026 07:56:57 +0000 (0:00:04.468) 0:11:04.118 ******** 
2026-04-16 07:57:14.438908 | orchestrator | changed: [testbed-node-1]
2026-04-16 07:57:14.438918 | orchestrator |
2026-04-16 07:57:14.438928 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 07:57:14.438937 | orchestrator | Thursday 16 April 2026 07:56:59 +0000 (0:00:02.157) 0:11:06.276 ********
2026-04-16 07:57:14.438947 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-04-16 07:57:14.438957 | orchestrator |
2026-04-16 07:57:14.438967 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 07:57:14.438977 | orchestrator | Thursday 16 April 2026 07:57:00 +0000 (0:00:01.126) 0:11:07.402 ********
2026-04-16 07:57:14.438986 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-04-16 07:57:14.438996 | orchestrator |
2026-04-16 07:57:14.439005 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 07:57:14.439015 | orchestrator | Thursday 16 April 2026 07:57:01 +0000 (0:00:01.101) 0:11:08.504 ********
2026-04-16 07:57:14.439025 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:14.439035 | orchestrator |
2026-04-16 07:57:14.439045 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 07:57:14.439054 | orchestrator | Thursday 16 April 2026 07:57:03 +0000 (0:00:01.556) 0:11:10.061 ********
2026-04-16 07:57:14.439064 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439073 | orchestrator |
2026-04-16 07:57:14.439083 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 07:57:14.439093 | orchestrator | Thursday 16 April 2026 07:57:04 +0000 (0:00:01.130) 0:11:11.192 ********
2026-04-16 07:57:14.439103 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439112 | orchestrator |
2026-04-16 07:57:14.439122 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 07:57:14.439132 | orchestrator | Thursday 16 April 2026 07:57:05 +0000 (0:00:01.122) 0:11:12.315 ********
2026-04-16 07:57:14.439147 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439158 | orchestrator |
2026-04-16 07:57:14.439167 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 07:57:14.439177 | orchestrator | Thursday 16 April 2026 07:57:06 +0000 (0:00:01.130) 0:11:13.445 ********
2026-04-16 07:57:14.439187 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:14.439196 | orchestrator |
2026-04-16 07:57:14.439206 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 07:57:14.439215 | orchestrator | Thursday 16 April 2026 07:57:08 +0000 (0:00:01.547) 0:11:14.993 ********
2026-04-16 07:57:14.439225 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439235 | orchestrator |
2026-04-16 07:57:14.439244 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 07:57:14.439254 | orchestrator | Thursday 16 April 2026 07:57:09 +0000 (0:00:01.133) 0:11:16.127 ********
2026-04-16 07:57:14.439264 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439274 | orchestrator |
2026-04-16 07:57:14.439283 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 07:57:14.439293 | orchestrator | Thursday 16 April 2026 07:57:10 +0000 (0:00:01.156) 0:11:17.284 ********
2026-04-16 07:57:14.439303 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:14.439313 | orchestrator |
2026-04-16 07:57:14.439322 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 07:57:14.439338 | orchestrator | Thursday 16 April 2026 07:57:12 +0000 (0:00:01.551) 0:11:18.835 ********
2026-04-16 07:57:14.439354 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:14.439370 | orchestrator |
2026-04-16 07:57:14.439385 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 07:57:14.439403 | orchestrator | Thursday 16 April 2026 07:57:13 +0000 (0:00:01.562) 0:11:20.397 ********
2026-04-16 07:57:14.439416 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:14.439430 | orchestrator |
2026-04-16 07:57:14.439445 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 07:57:14.439460 | orchestrator | Thursday 16 April 2026 07:57:14 +0000 (0:00:00.734) 0:11:21.132 ********
2026-04-16 07:57:14.439484 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.213214 | orchestrator |
2026-04-16 07:57:52.213339 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 07:57:52.213358 | orchestrator | Thursday 16 April 2026 07:57:15 +0000 (0:00:00.805) 0:11:21.937 ********
2026-04-16 07:57:52.213372 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213387 | orchestrator |
2026-04-16 07:57:52.213401 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 07:57:52.213414 | orchestrator | Thursday 16 April 2026 07:57:15 +0000 (0:00:00.761) 0:11:22.699 ********
2026-04-16 07:57:52.213429 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213441 | orchestrator |
2026-04-16 07:57:52.213454 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 07:57:52.213467 | orchestrator | Thursday 16 April 2026 07:57:16 +0000 (0:00:00.754) 0:11:23.454 ********
2026-04-16 07:57:52.213479 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213492 | orchestrator |
2026-04-16 07:57:52.213505 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 07:57:52.213518 | orchestrator | Thursday 16 April 2026 07:57:17 +0000 (0:00:00.771) 0:11:24.226 ********
2026-04-16 07:57:52.213634 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213656 | orchestrator |
2026-04-16 07:57:52.213672 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 07:57:52.213688 | orchestrator | Thursday 16 April 2026 07:57:18 +0000 (0:00:00.775) 0:11:25.001 ********
2026-04-16 07:57:52.213703 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213718 | orchestrator |
2026-04-16 07:57:52.213734 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 07:57:52.213751 | orchestrator | Thursday 16 April 2026 07:57:18 +0000 (0:00:00.744) 0:11:25.746 ********
2026-04-16 07:57:52.213794 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.213812 | orchestrator |
2026-04-16 07:57:52.213829 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 07:57:52.213844 | orchestrator | Thursday 16 April 2026 07:57:19 +0000 (0:00:00.766) 0:11:26.513 ********
2026-04-16 07:57:52.213859 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.213875 | orchestrator |
2026-04-16 07:57:52.213890 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 07:57:52.213904 | orchestrator | Thursday 16 April 2026 07:57:20 +0000 (0:00:00.786) 0:11:27.299 ********
2026-04-16 07:57:52.213919 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.213933 | orchestrator |
2026-04-16 07:57:52.213948 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 07:57:52.213964 | orchestrator | Thursday 16 April 2026 07:57:21 +0000 (0:00:00.790) 0:11:28.090 ********
2026-04-16 07:57:52.213980 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.213995 | orchestrator |
2026-04-16 07:57:52.214010 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 07:57:52.214100 | orchestrator | Thursday 16 April 2026 07:57:22 +0000 (0:00:00.751) 0:11:28.841 ********
2026-04-16 07:57:52.214115 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214129 | orchestrator |
2026-04-16 07:57:52.214143 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 07:57:52.214156 | orchestrator | Thursday 16 April 2026 07:57:22 +0000 (0:00:00.740) 0:11:29.582 ********
2026-04-16 07:57:52.214169 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214180 | orchestrator |
2026-04-16 07:57:52.214192 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 07:57:52.214205 | orchestrator | Thursday 16 April 2026 07:57:23 +0000 (0:00:00.756) 0:11:30.338 ********
2026-04-16 07:57:52.214218 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214231 | orchestrator |
2026-04-16 07:57:52.214244 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 07:57:52.214257 | orchestrator | Thursday 16 April 2026 07:57:24 +0000 (0:00:00.754) 0:11:31.093 ********
2026-04-16 07:57:52.214270 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214283 | orchestrator |
2026-04-16 07:57:52.214296 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 07:57:52.214310 | orchestrator | Thursday 16 April 2026 07:57:25 +0000 (0:00:00.764) 0:11:31.857 ********
2026-04-16 07:57:52.214324 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214337 | orchestrator |
2026-04-16 07:57:52.214350 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 07:57:52.214360 | orchestrator | Thursday 16 April 2026 07:57:25 +0000 (0:00:00.769) 0:11:32.627 ********
2026-04-16 07:57:52.214368 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214375 | orchestrator |
2026-04-16 07:57:52.214383 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 07:57:52.214392 | orchestrator | Thursday 16 April 2026 07:57:26 +0000 (0:00:00.761) 0:11:33.389 ********
2026-04-16 07:57:52.214399 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214407 | orchestrator |
2026-04-16 07:57:52.214415 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 07:57:52.214424 | orchestrator | Thursday 16 April 2026 07:57:27 +0000 (0:00:00.760) 0:11:34.150 ********
2026-04-16 07:57:52.214432 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214439 | orchestrator |
2026-04-16 07:57:52.214447 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 07:57:52.214455 | orchestrator | Thursday 16 April 2026 07:57:28 +0000 (0:00:00.764) 0:11:34.915 ********
2026-04-16 07:57:52.214463 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214470 | orchestrator |
2026-04-16 07:57:52.214478 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 07:57:52.214499 | orchestrator | Thursday 16 April 2026 07:57:28 +0000 (0:00:00.741) 0:11:35.656 ********
2026-04-16 07:57:52.214507 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214515 | orchestrator |
2026-04-16 07:57:52.214522 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 07:57:52.214530 | orchestrator | Thursday 16 April 2026 07:57:29 +0000 (0:00:00.759) 0:11:36.416 ********
2026-04-16 07:57:52.214538 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214576 | orchestrator |
2026-04-16 07:57:52.214606 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 07:57:52.214615 | orchestrator | Thursday 16 April 2026 07:57:30 +0000 (0:00:00.772) 0:11:37.189 ********
2026-04-16 07:57:52.214623 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.214631 | orchestrator |
2026-04-16 07:57:52.214639 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 07:57:52.214647 | orchestrator | Thursday 16 April 2026 07:57:32 +0000 (0:00:01.641) 0:11:38.830 ********
2026-04-16 07:57:52.214655 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.214663 | orchestrator |
2026-04-16 07:57:52.214671 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 07:57:52.214678 | orchestrator | Thursday 16 April 2026 07:57:34 +0000 (0:00:02.112) 0:11:40.943 ********
2026-04-16 07:57:52.214687 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-16 07:57:52.214696 | orchestrator |
2026-04-16 07:57:52.214704 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 07:57:52.214712 | orchestrator | Thursday 16 April 2026 07:57:35 +0000 (0:00:01.130) 0:11:42.074 ********
2026-04-16 07:57:52.214720 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214727 | orchestrator |
2026-04-16 07:57:52.214744 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 07:57:52.214752 | orchestrator | Thursday 16 April 2026 07:57:36 +0000 (0:00:01.103) 0:11:43.178 ********
2026-04-16 07:57:52.214760 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214768 | orchestrator |
2026-04-16 07:57:52.214776 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 07:57:52.214784 | orchestrator | Thursday 16 April 2026 07:57:37 +0000 (0:00:01.098) 0:11:44.277 ********
2026-04-16 07:57:52.214791 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 07:57:52.214799 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 07:57:52.214807 | orchestrator |
2026-04-16 07:57:52.214815 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 07:57:52.214823 | orchestrator | Thursday 16 April 2026 07:57:39 +0000 (0:00:01.816) 0:11:46.093 ********
2026-04-16 07:57:52.214831 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.214839 | orchestrator |
2026-04-16 07:57:52.214847 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 07:57:52.214854 | orchestrator | Thursday 16 April 2026 07:57:40 +0000 (0:00:01.471) 0:11:47.564 ********
2026-04-16 07:57:52.214862 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214870 | orchestrator |
2026-04-16 07:57:52.214878 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 07:57:52.214886 | orchestrator | Thursday 16 April 2026 07:57:41 +0000 (0:00:01.115) 0:11:48.679 ********
2026-04-16 07:57:52.214894 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214901 | orchestrator |
2026-04-16 07:57:52.214909 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 07:57:52.214917 | orchestrator | Thursday 16 April 2026 07:57:42 +0000 (0:00:00.772) 0:11:49.452 ********
2026-04-16 07:57:52.214925 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.214933 | orchestrator |
2026-04-16 07:57:52.214941 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 07:57:52.214949 | orchestrator | Thursday 16 April 2026 07:57:43 +0000 (0:00:00.753) 0:11:50.206 ********
2026-04-16 07:57:52.214965 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-16 07:57:52.214973 | orchestrator |
2026-04-16 07:57:52.214981 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-16 07:57:52.214989 | orchestrator | Thursday 16 April 2026 07:57:44 +0000 (0:00:01.143) 0:11:51.349 ********
2026-04-16 07:57:52.214996 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:57:52.215004 | orchestrator |
2026-04-16 07:57:52.215012 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-16 07:57:52.215021 | orchestrator | Thursday 16 April 2026 07:57:46 +0000 (0:00:01.756) 0:11:53.106 ********
2026-04-16 07:57:52.215029 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-16 07:57:52.215037 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-16 07:57:52.215045 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-16 07:57:52.215053 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215060 | orchestrator |
2026-04-16 07:57:52.215068 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-16 07:57:52.215076 | orchestrator | Thursday 16 April 2026 07:57:47 +0000 (0:00:01.152) 0:11:54.259 ********
2026-04-16 07:57:52.215084 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215092 | orchestrator |
2026-04-16 07:57:52.215100 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-16 07:57:52.215108 | orchestrator | Thursday 16 April 2026 07:57:48 +0000 (0:00:01.113) 0:11:55.372 ********
2026-04-16 07:57:52.215116 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215124 | orchestrator |
2026-04-16 07:57:52.215132 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-16 07:57:52.215139 | orchestrator | Thursday 16 April 2026 07:57:49 +0000 (0:00:01.166) 0:11:56.539 ********
2026-04-16 07:57:52.215147 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215155 | orchestrator |
2026-04-16 07:57:52.215163 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-16 07:57:52.215171 | orchestrator | Thursday 16 April 2026 07:57:50 +0000 (0:00:01.146) 0:11:57.685 ********
2026-04-16 07:57:52.215179 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215187 | orchestrator |
2026-04-16 07:57:52.215195 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-16 07:57:52.215203 | orchestrator | Thursday 16 April 2026 07:57:52 +0000 (0:00:01.120) 0:11:58.806 ********
2026-04-16 07:57:52.215211 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:57:52.215219 | orchestrator |
2026-04-16 07:57:52.215232 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 07:58:30.117561 | orchestrator | Thursday 16 April 2026 07:57:52 +0000 (0:00:00.785) 0:11:59.592 ********
2026-04-16 07:58:30.117668 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:58:30.117676 | orchestrator |
2026-04-16 07:58:30.117682 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 07:58:30.117687 | orchestrator | Thursday 16 April 2026 07:57:55 +0000 (0:00:02.249) 0:12:01.841 ********
2026-04-16 07:58:30.117691 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:58:30.117695 | orchestrator |
2026-04-16 07:58:30.117699 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 07:58:30.117703 | orchestrator | Thursday 16 April 2026 07:57:55 +0000 (0:00:00.776) 0:12:02.618 ********
2026-04-16 07:58:30.117707 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-16 07:58:30.117711 | orchestrator |
2026-04-16 07:58:30.117716 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-16 07:58:30.117719 | orchestrator | Thursday 16 April 2026 07:57:57 +0000 (0:00:01.159) 0:12:03.778 ********
2026-04-16 07:58:30.117724 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117728 | orchestrator |
2026-04-16 07:58:30.117745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-16 07:58:30.117764 | orchestrator | Thursday 16 April 2026 07:57:58 +0000 (0:00:01.133) 0:12:04.912 ********
2026-04-16 07:58:30.117768 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117772 | orchestrator |
2026-04-16 07:58:30.117776 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-16 07:58:30.117779 | orchestrator | Thursday 16 April 2026 07:57:59 +0000 (0:00:01.149) 0:12:06.061 ********
2026-04-16 07:58:30.117783 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117787 | orchestrator |
2026-04-16 07:58:30.117791 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-16 07:58:30.117794 | orchestrator | Thursday 16 April 2026 07:58:00 +0000 (0:00:01.112) 0:12:07.174 ********
2026-04-16 07:58:30.117798 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117802 | orchestrator |
2026-04-16 07:58:30.117806 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-16 07:58:30.117809 | orchestrator | Thursday 16 April 2026 07:58:01 +0000 (0:00:01.119) 0:12:08.293 ********
2026-04-16 07:58:30.117813 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117817 | orchestrator |
2026-04-16 07:58:30.117820 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-16 07:58:30.117824 | orchestrator | Thursday 16 April 2026 07:58:02 +0000 (0:00:01.162) 0:12:09.456 ********
2026-04-16 07:58:30.117828 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117831 | orchestrator |
2026-04-16 07:58:30.117835 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-16 07:58:30.117839 | orchestrator | Thursday 16 April 2026 07:58:03 +0000 (0:00:01.126) 0:12:10.583 ********
2026-04-16 07:58:30.117842 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117846 | orchestrator |
2026-04-16 07:58:30.117850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-16 07:58:30.117854 | orchestrator | Thursday 16 April 2026 07:58:04 +0000 (0:00:01.147) 0:12:11.731 ********
2026-04-16 07:58:30.117857 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117861 | orchestrator |
2026-04-16 07:58:30.117865 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-16 07:58:30.117869 | orchestrator | Thursday 16 April 2026 07:58:06 +0000 (0:00:01.182) 0:12:12.913 ********
2026-04-16 07:58:30.117873 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:58:30.117877 | orchestrator |
2026-04-16 07:58:30.117883 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 07:58:30.117889 | orchestrator | Thursday 16 April 2026 07:58:06 +0000 (0:00:00.787) 0:12:13.700 ********
2026-04-16 07:58:30.117896 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-16 07:58:30.117901 | orchestrator |
2026-04-16 07:58:30.117905 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-16 07:58:30.117909 | orchestrator | Thursday 16 April 2026 07:58:08 +0000 (0:00:01.164) 0:12:14.865 ********
2026-04-16 07:58:30.117912 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-16 07:58:30.117916 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-16 07:58:30.117920 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-04-16 07:58:30.117924 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-04-16 07:58:30.117928 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-04-16 07:58:30.117931 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-04-16 07:58:30.117935 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-04-16 07:58:30.117939 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-04-16 07:58:30.117943 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-16 07:58:30.117947 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-16 07:58:30.117951 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-16 07:58:30.117959 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-16 07:58:30.117963 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-16 07:58:30.117966 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-16 07:58:30.117970 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-04-16 07:58:30.117974 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-04-16 07:58:30.117978 | orchestrator |
2026-04-16 07:58:30.117981 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-16 07:58:30.117985 | orchestrator | Thursday 16 April 2026 07:58:14 +0000 (0:00:06.288) 0:12:21.154 ********
2026-04-16 07:58:30.117989 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.117993 | orchestrator |
2026-04-16 07:58:30.117997 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 07:58:30.118012 | orchestrator | Thursday 16 April 2026 07:58:15 +0000 (0:00:00.777) 0:12:21.932 ********
2026-04-16 07:58:30.118054 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118062 | orchestrator |
2026-04-16 07:58:30.118067 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 07:58:30.118079 | orchestrator | Thursday 16 April 2026 07:58:15 +0000 (0:00:00.759) 0:12:22.691 ********
2026-04-16 07:58:30.118086 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118092 | orchestrator |
2026-04-16 07:58:30.118098 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 07:58:30.118103 | orchestrator | Thursday 16 April 2026 07:58:16 +0000 (0:00:00.759) 0:12:23.451 ********
2026-04-16 07:58:30.118108 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118112 | orchestrator |
2026-04-16 07:58:30.118117 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 07:58:30.118121 | orchestrator | Thursday 16 April 2026 07:58:17 +0000 (0:00:00.779) 0:12:24.230 ********
2026-04-16 07:58:30.118125 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118130 | orchestrator |
2026-04-16 07:58:30.118138 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 07:58:30.118142 | orchestrator | Thursday 16 April 2026 07:58:18 +0000 (0:00:00.766) 0:12:24.997 ********
2026-04-16 07:58:30.118147 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118151 | orchestrator |
2026-04-16 07:58:30.118158 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 07:58:30.118164 | orchestrator | Thursday 16 April 2026 07:58:18 +0000 (0:00:00.743) 0:12:25.741 ********
2026-04-16 07:58:30.118170 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118176 | orchestrator |
2026-04-16 07:58:30.118182 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 07:58:30.118188 | orchestrator | Thursday 16 April 2026 07:58:19 +0000 (0:00:00.749) 0:12:26.491 ********
2026-04-16 07:58:30.118194 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118200 | orchestrator |
2026-04-16 07:58:30.118206 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 07:58:30.118211 | orchestrator | Thursday 16 April 2026 07:58:20 +0000 (0:00:00.795) 0:12:27.286 ********
2026-04-16 07:58:30.118217 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118223 | orchestrator |
2026-04-16 07:58:30.118230 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 07:58:30.118237 | orchestrator | Thursday 16 April 2026 07:58:21 +0000 (0:00:00.764) 0:12:28.051 ********
2026-04-16 07:58:30.118243 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118249 | orchestrator |
2026-04-16 07:58:30.118255 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 07:58:30.118262 | orchestrator | Thursday 16 April 2026 07:58:22 +0000 (0:00:00.760) 0:12:28.811 ********
2026-04-16 07:58:30.118268 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118274 | orchestrator |
2026-04-16 07:58:30.118287 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 07:58:30.118292 | orchestrator | Thursday 16 April 2026 07:58:22 +0000 (0:00:00.775) 0:12:29.587 ********
2026-04-16 07:58:30.118296 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118300 | orchestrator |
2026-04-16 07:58:30.118305 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 07:58:30.118309 | orchestrator | Thursday 16 April 2026 07:58:23 +0000 (0:00:00.759) 0:12:30.346 ********
2026-04-16 07:58:30.118313 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118317 | orchestrator |
2026-04-16 07:58:30.118322 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 07:58:30.118326 | orchestrator | Thursday 16 April 2026 07:58:24 +0000 (0:00:00.849) 0:12:31.196 ********
2026-04-16 07:58:30.118330 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118335 | orchestrator |
2026-04-16 07:58:30.118339 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 07:58:30.118343 | orchestrator | Thursday 16 April 2026 07:58:25 +0000 (0:00:00.788) 0:12:31.984 ********
2026-04-16 07:58:30.118348 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118352 | orchestrator |
2026-04-16 07:58:30.118356 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 07:58:30.118360 | orchestrator | Thursday 16 April 2026 07:58:26 +0000 (0:00:00.897) 0:12:32.881 ********
2026-04-16 07:58:30.118365 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118369 | orchestrator |
2026-04-16 07:58:30.118373 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 07:58:30.118377 | orchestrator | Thursday 16 April 2026 07:58:26 +0000 (0:00:00.762) 0:12:33.644 ********
2026-04-16 07:58:30.118382 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118386 | orchestrator |
2026-04-16 07:58:30.118391 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 07:58:30.118396 | orchestrator | Thursday 16 April 2026 07:58:27 +0000 (0:00:00.752) 0:12:34.397 ********
2026-04-16 07:58:30.118401 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118405 | orchestrator |
2026-04-16 07:58:30.118409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 07:58:30.118413 | orchestrator | Thursday 16 April 2026 07:58:28 +0000 (0:00:00.796) 0:12:35.193 ********
2026-04-16 07:58:30.118418 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118422 | orchestrator |
2026-04-16 07:58:30.118426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 07:58:30.118430 | orchestrator | Thursday 16 April 2026 07:58:29 +0000 (0:00:00.760) 0:12:35.954 ********
2026-04-16 07:58:30.118435 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118439 | orchestrator |
2026-04-16 07:58:30.118443 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 07:58:30.118447 | orchestrator | Thursday 16 April 2026 07:58:29 +0000 (0:00:00.762) 0:12:36.716 ********
2026-04-16 07:58:30.118452 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:58:30.118456 | orchestrator |
2026-04-16 07:58:30.118467 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 07:59:45.749567 | orchestrator | Thursday 16 April 2026 07:58:30 +0000 (0:00:00.780) 0:12:37.496 ********
2026-04-16 07:59:45.749774 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 07:59:45.749795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 07:59:45.749807 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 07:59:45.749818 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.749831 | orchestrator |
2026-04-16 07:59:45.749858 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 07:59:45.749880 | orchestrator | Thursday 16 April 2026 07:58:31 +0000 (0:00:01.009) 0:12:38.506 ********
2026-04-16 07:59:45.749892 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 07:59:45.749929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 07:59:45.749941 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 07:59:45.749952 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.749977 | orchestrator |
2026-04-16 07:59:45.749989 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 07:59:45.749999 | orchestrator | Thursday 16 April 2026 07:58:32 +0000 (0:00:01.019) 0:12:39.526 ********
2026-04-16 07:59:45.750010 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 07:59:45.750082 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 07:59:45.750093 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 07:59:45.750104 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.750118 | orchestrator |
2026-04-16 07:59:45.750130 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 07:59:45.750143 | orchestrator | Thursday 16 April 2026 07:58:33 +0000 (0:00:01.083) 0:12:40.610 ********
2026-04-16 07:59:45.750156 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.750168 | orchestrator |
2026-04-16 07:59:45.750181 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 07:59:45.750193 | orchestrator | Thursday 16 April 2026 07:58:34 +0000 (0:00:00.795) 0:12:41.405 ********
2026-04-16 07:59:45.750207 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-16 07:59:45.750220 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.750232 | orchestrator |
2026-04-16 07:59:45.750245 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 07:59:45.750257 | orchestrator | Thursday 16 April 2026 07:58:35 +0000 (0:00:01.027) 0:12:42.433 ********
2026-04-16 07:59:45.750269 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:59:45.750282 | orchestrator |
2026-04-16 07:59:45.750294 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-16 07:59:45.750307 | orchestrator | Thursday 16 April 2026 07:58:37 +0000 (0:00:01.423) 0:12:43.857 ********
2026-04-16 07:59:45.750319 | orchestrator | ok: [testbed-node-1]
2026-04-16 07:59:45.750331 | orchestrator |
2026-04-16 07:59:45.750343 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-16 07:59:45.750356 | orchestrator | Thursday 16 April 2026 07:58:37 +0000 (0:00:00.784) 0:12:44.642 ********
2026-04-16 07:59:45.750369 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-04-16 07:59:45.750382 | orchestrator |
2026-04-16 07:59:45.750394 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-16 07:59:45.750406 | orchestrator | Thursday 16 April 2026 07:58:39 +0000 (0:00:01.160) 0:12:45.802 ********
2026-04-16 07:59:45.750419 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-04-16 07:59:45.750431 | orchestrator |
2026-04-16 07:59:45.750444 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-16 07:59:45.750457 | orchestrator | Thursday 16 April 2026 07:58:42 +0000 (0:00:03.249) 0:12:49.051 ********
2026-04-16 07:59:45.750469 | orchestrator | skipping: [testbed-node-1]
2026-04-16 07:59:45.750481 | orchestrator |
2026-04-16 07:59:45.750492 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-04-16 07:59:45.750502 | orchestrator | Thursday 16 April 2026 07:58:43 +0000 (0:00:01.156) 0:12:50.208 ******** 2026-04-16 07:59:45.750513 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.750524 | orchestrator | 2026-04-16 07:59:45.750535 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-04-16 07:59:45.750545 | orchestrator | Thursday 16 April 2026 07:58:44 +0000 (0:00:01.161) 0:12:51.369 ******** 2026-04-16 07:59:45.750556 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.750567 | orchestrator | 2026-04-16 07:59:45.750578 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-04-16 07:59:45.750589 | orchestrator | Thursday 16 April 2026 07:58:45 +0000 (0:00:01.150) 0:12:52.519 ******** 2026-04-16 07:59:45.750608 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:59:45.750619 | orchestrator | 2026-04-16 07:59:45.750630 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-04-16 07:59:45.750660 | orchestrator | Thursday 16 April 2026 07:58:47 +0000 (0:00:01.991) 0:12:54.511 ******** 2026-04-16 07:59:45.750671 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.750682 | orchestrator | 2026-04-16 07:59:45.750692 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-04-16 07:59:45.750703 | orchestrator | Thursday 16 April 2026 07:58:49 +0000 (0:00:01.578) 0:12:56.089 ******** 2026-04-16 07:59:45.750714 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.750724 | orchestrator | 2026-04-16 07:59:45.750735 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-04-16 07:59:45.750746 | orchestrator | Thursday 16 April 2026 07:58:50 +0000 (0:00:01.486) 0:12:57.576 
******** 2026-04-16 07:59:45.750756 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.750767 | orchestrator | 2026-04-16 07:59:45.750778 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-04-16 07:59:45.750788 | orchestrator | Thursday 16 April 2026 07:58:52 +0000 (0:00:01.503) 0:12:59.079 ******** 2026-04-16 07:59:45.750799 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-16 07:59:45.750810 | orchestrator | 2026-04-16 07:59:45.750841 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-04-16 07:59:45.750853 | orchestrator | Thursday 16 April 2026 07:58:53 +0000 (0:00:01.569) 0:13:00.649 ******** 2026-04-16 07:59:45.750864 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-16 07:59:45.750874 | orchestrator | 2026-04-16 07:59:45.750885 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-04-16 07:59:45.750896 | orchestrator | Thursday 16 April 2026 07:58:55 +0000 (0:00:01.588) 0:13:02.237 ******** 2026-04-16 07:59:45.750907 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 07:59:45.750918 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-04-16 07:59:45.750929 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-04-16 07:59:45.750939 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-04-16 07:59:45.750950 | orchestrator | 2026-04-16 07:59:45.750961 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-04-16 07:59:45.750977 | orchestrator | Thursday 16 April 2026 07:58:59 +0000 (0:00:03.852) 0:13:06.090 ******** 2026-04-16 07:59:45.750988 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:59:45.750999 | orchestrator | 2026-04-16 07:59:45.751010 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon 
container command] ************************** 2026-04-16 07:59:45.751021 | orchestrator | Thursday 16 April 2026 07:59:01 +0000 (0:00:01.992) 0:13:08.083 ******** 2026-04-16 07:59:45.751032 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751043 | orchestrator | 2026-04-16 07:59:45.751054 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-04-16 07:59:45.751064 | orchestrator | Thursday 16 April 2026 07:59:02 +0000 (0:00:01.105) 0:13:09.188 ******** 2026-04-16 07:59:45.751075 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751086 | orchestrator | 2026-04-16 07:59:45.751097 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-04-16 07:59:45.751108 | orchestrator | Thursday 16 April 2026 07:59:03 +0000 (0:00:01.126) 0:13:10.315 ******** 2026-04-16 07:59:45.751118 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751129 | orchestrator | 2026-04-16 07:59:45.751140 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-04-16 07:59:45.751151 | orchestrator | Thursday 16 April 2026 07:59:05 +0000 (0:00:01.796) 0:13:12.111 ******** 2026-04-16 07:59:45.751161 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751172 | orchestrator | 2026-04-16 07:59:45.751183 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-04-16 07:59:45.751194 | orchestrator | Thursday 16 April 2026 07:59:06 +0000 (0:00:01.485) 0:13:13.596 ******** 2026-04-16 07:59:45.751211 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:59:45.751222 | orchestrator | 2026-04-16 07:59:45.751233 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-04-16 07:59:45.751244 | orchestrator | Thursday 16 April 2026 07:59:07 +0000 (0:00:00.765) 0:13:14.362 ******** 2026-04-16 07:59:45.751255 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-04-16 07:59:45.751266 | orchestrator | 2026-04-16 07:59:45.751276 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-16 07:59:45.751287 | orchestrator | Thursday 16 April 2026 07:59:08 +0000 (0:00:01.114) 0:13:15.476 ******** 2026-04-16 07:59:45.751298 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:59:45.751308 | orchestrator | 2026-04-16 07:59:45.751319 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-16 07:59:45.751330 | orchestrator | Thursday 16 April 2026 07:59:09 +0000 (0:00:01.106) 0:13:16.583 ******** 2026-04-16 07:59:45.751341 | orchestrator | skipping: [testbed-node-1] 2026-04-16 07:59:45.751351 | orchestrator | 2026-04-16 07:59:45.751362 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-16 07:59:45.751373 | orchestrator | Thursday 16 April 2026 07:59:10 +0000 (0:00:01.109) 0:13:17.692 ******** 2026-04-16 07:59:45.751384 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-04-16 07:59:45.751394 | orchestrator | 2026-04-16 07:59:45.751405 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-16 07:59:45.751416 | orchestrator | Thursday 16 April 2026 07:59:12 +0000 (0:00:01.130) 0:13:18.823 ******** 2026-04-16 07:59:45.751427 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751438 | orchestrator | 2026-04-16 07:59:45.751448 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-16 07:59:45.751459 | orchestrator | Thursday 16 April 2026 07:59:14 +0000 (0:00:02.356) 0:13:21.180 ******** 2026-04-16 07:59:45.751470 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751481 | orchestrator | 2026-04-16 07:59:45.751491 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-04-16 07:59:45.751502 | orchestrator | Thursday 16 April 2026 07:59:16 +0000 (0:00:01.960) 0:13:23.141 ******** 2026-04-16 07:59:45.751513 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751524 | orchestrator | 2026-04-16 07:59:45.751535 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-16 07:59:45.751545 | orchestrator | Thursday 16 April 2026 07:59:18 +0000 (0:00:02.406) 0:13:25.547 ******** 2026-04-16 07:59:45.751556 | orchestrator | changed: [testbed-node-1] 2026-04-16 07:59:45.751567 | orchestrator | 2026-04-16 07:59:45.751578 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-16 07:59:45.751589 | orchestrator | Thursday 16 April 2026 07:59:21 +0000 (0:00:02.908) 0:13:28.455 ******** 2026-04-16 07:59:45.751599 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-04-16 07:59:45.751610 | orchestrator | 2026-04-16 07:59:45.751621 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-16 07:59:45.751674 | orchestrator | Thursday 16 April 2026 07:59:22 +0000 (0:00:01.130) 0:13:29.586 ******** 2026-04-16 07:59:45.751686 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-16 07:59:45.751698 | orchestrator | ok: [testbed-node-1] 2026-04-16 07:59:45.751709 | orchestrator | 2026-04-16 07:59:45.751720 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-16 07:59:45.751738 | orchestrator | Thursday 16 April 2026 07:59:45 +0000 (0:00:22.910) 0:13:52.497 ******** 2026-04-16 08:00:27.706832 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:00:27.706954 | orchestrator | 2026-04-16 08:00:27.706970 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-16 08:00:27.706983 | orchestrator | Thursday 16 April 2026 07:59:48 +0000 (0:00:02.698) 0:13:55.195 ******** 2026-04-16 08:00:27.707014 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:00:27.707025 | orchestrator | 2026-04-16 08:00:27.707037 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-16 08:00:27.707053 | orchestrator | Thursday 16 April 2026 07:59:49 +0000 (0:00:00.768) 0:13:55.963 ******** 2026-04-16 08:00:27.707089 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-16 08:00:27.707111 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-16 08:00:27.707128 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-16 08:00:27.707145 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-16 08:00:27.707163 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-16 08:00:27.707181 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}])  2026-04-16 08:00:27.707198 | orchestrator | 2026-04-16 08:00:27.707214 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-16 08:00:27.707230 | orchestrator | Thursday 16 April 2026 07:59:58 +0000 (0:00:09.626) 0:14:05.590 ******** 2026-04-16 08:00:27.707245 | orchestrator | changed: [testbed-node-1] 2026-04-16 08:00:27.707263 | orchestrator | 
2026-04-16 08:00:27.707278 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:00:27.707296 | orchestrator | Thursday 16 April 2026 08:00:00 +0000 (0:00:02.149) 0:14:07.739 ******** 2026-04-16 08:00:27.707313 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:00:27.707331 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-16 08:00:27.707345 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-16 08:00:27.707356 | orchestrator | 2026-04-16 08:00:27.707368 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:00:27.707380 | orchestrator | Thursday 16 April 2026 08:00:02 +0000 (0:00:01.495) 0:14:09.235 ******** 2026-04-16 08:00:27.707391 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 08:00:27.707414 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 08:00:27.707426 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 08:00:27.707437 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:00:27.707447 | orchestrator | 2026-04-16 08:00:27.707460 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-16 08:00:27.707471 | orchestrator | Thursday 16 April 2026 08:00:03 +0000 (0:00:01.013) 0:14:10.248 ******** 2026-04-16 08:00:27.707482 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:00:27.707493 | orchestrator | 2026-04-16 08:00:27.707504 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-16 08:00:27.707535 | orchestrator | Thursday 16 April 2026 08:00:04 +0000 (0:00:00.755) 0:14:11.004 ******** 2026-04-16 08:00:27.707547 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:00:27.707558 | orchestrator | 2026-04-16 08:00:27.707570 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-04-16 08:00:27.707580 | orchestrator | 2026-04-16 08:00:27.707591 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-04-16 08:00:27.707602 | orchestrator | Thursday 16 April 2026 08:00:06 +0000 (0:00:02.185) 0:14:13.190 ******** 2026-04-16 08:00:27.707613 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707625 | orchestrator | 2026-04-16 08:00:27.707634 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-04-16 08:00:27.707644 | orchestrator | Thursday 16 April 2026 08:00:07 +0000 (0:00:01.118) 0:14:14.309 ******** 2026-04-16 08:00:27.707653 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707718 | orchestrator | 2026-04-16 08:00:27.707727 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-04-16 08:00:27.707742 | orchestrator | Thursday 16 April 2026 08:00:08 +0000 (0:00:00.817) 0:14:15.127 ******** 2026-04-16 08:00:27.707750 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:27.707758 | orchestrator | 2026-04-16 08:00:27.707766 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-04-16 08:00:27.707774 | orchestrator | Thursday 16 April 2026 08:00:09 +0000 (0:00:00.749) 0:14:15.876 ******** 2026-04-16 08:00:27.707782 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707790 | orchestrator | 2026-04-16 08:00:27.707798 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:00:27.707806 | orchestrator | Thursday 16 April 
2026 08:00:09 +0000 (0:00:00.776) 0:14:16.652 ******** 2026-04-16 08:00:27.707814 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-04-16 08:00:27.707822 | orchestrator | 2026-04-16 08:00:27.707830 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:00:27.707838 | orchestrator | Thursday 16 April 2026 08:00:11 +0000 (0:00:01.249) 0:14:17.901 ******** 2026-04-16 08:00:27.707846 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707854 | orchestrator | 2026-04-16 08:00:27.707862 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:00:27.707870 | orchestrator | Thursday 16 April 2026 08:00:12 +0000 (0:00:01.442) 0:14:19.344 ******** 2026-04-16 08:00:27.707878 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707886 | orchestrator | 2026-04-16 08:00:27.707894 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:00:27.707902 | orchestrator | Thursday 16 April 2026 08:00:13 +0000 (0:00:01.109) 0:14:20.454 ******** 2026-04-16 08:00:27.707910 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707918 | orchestrator | 2026-04-16 08:00:27.707926 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:00:27.707934 | orchestrator | Thursday 16 April 2026 08:00:15 +0000 (0:00:01.446) 0:14:21.901 ******** 2026-04-16 08:00:27.707942 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707950 | orchestrator | 2026-04-16 08:00:27.707958 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:00:27.707966 | orchestrator | Thursday 16 April 2026 08:00:16 +0000 (0:00:01.122) 0:14:23.023 ******** 2026-04-16 08:00:27.707985 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.707999 | orchestrator | 2026-04-16 08:00:27.708012 | 
orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:00:27.708026 | orchestrator | Thursday 16 April 2026 08:00:17 +0000 (0:00:01.120) 0:14:24.143 ******** 2026-04-16 08:00:27.708038 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.708052 | orchestrator | 2026-04-16 08:00:27.708066 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:00:27.708079 | orchestrator | Thursday 16 April 2026 08:00:18 +0000 (0:00:01.124) 0:14:25.268 ******** 2026-04-16 08:00:27.708092 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:27.708105 | orchestrator | 2026-04-16 08:00:27.708113 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:00:27.708121 | orchestrator | Thursday 16 April 2026 08:00:19 +0000 (0:00:01.130) 0:14:26.398 ******** 2026-04-16 08:00:27.708129 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.708137 | orchestrator | 2026-04-16 08:00:27.708144 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:00:27.708152 | orchestrator | Thursday 16 April 2026 08:00:20 +0000 (0:00:01.161) 0:14:27.560 ******** 2026-04-16 08:00:27.708160 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:00:27.708168 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:00:27.708176 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:00:27.708184 | orchestrator | 2026-04-16 08:00:27.708192 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:00:27.708199 | orchestrator | Thursday 16 April 2026 08:00:22 +0000 (0:00:01.924) 0:14:29.485 ******** 2026-04-16 08:00:27.708209 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:27.708223 | 
orchestrator | 2026-04-16 08:00:27.708236 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:00:27.708248 | orchestrator | Thursday 16 April 2026 08:00:23 +0000 (0:00:01.224) 0:14:30.709 ******** 2026-04-16 08:00:27.708261 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:00:27.708274 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:00:27.708287 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:00:27.708299 | orchestrator | 2026-04-16 08:00:27.708313 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:00:27.708326 | orchestrator | Thursday 16 April 2026 08:00:27 +0000 (0:00:03.119) 0:14:33.829 ******** 2026-04-16 08:00:27.708339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 08:00:27.708352 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 08:00:27.708366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 08:00:27.708390 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.000795 | orchestrator | 2026-04-16 08:00:51.000892 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:00:51.000906 | orchestrator | Thursday 16 April 2026 08:00:28 +0000 (0:00:01.711) 0:14:35.541 ******** 2026-04-16 08:00:51.000917 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.000944 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.000954 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.000987 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.000998 | orchestrator | 2026-04-16 08:00:51.001007 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:00:51.001017 | orchestrator | Thursday 16 April 2026 08:00:30 +0000 (0:00:01.906) 0:14:37.448 ******** 2026-04-16 08:00:51.001027 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.001040 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.001049 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:51.001058 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001067 | orchestrator | 2026-04-16 08:00:51.001076 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:00:51.001085 | orchestrator | Thursday 16 April 2026 08:00:31 +0000 (0:00:01.183) 0:14:38.632 ******** 2026-04-16 08:00:51.001096 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:00:24.477207', 'end': '2026-04-16 08:00:24.524005', 'delta': '0:00:00.046798', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:00:51.001110 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:00:25.323523', 'end': '2026-04-16 08:00:25.360449', 'delta': '0:00:00.036926', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:00:51.001142 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '8eb997055eb5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:00:25.872561', 'end': '2026-04-16 08:00:25.927858', 'delta': '0:00:00.055297', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8eb997055eb5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:00:51.001159 | orchestrator | 2026-04-16 08:00:51.001169 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:00:51.001178 | orchestrator | Thursday 16 April 2026 08:00:33 +0000 (0:00:01.190) 0:14:39.823 ******** 2026-04-16 08:00:51.001187 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:51.001197 | orchestrator | 2026-04-16 08:00:51.001206 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:00:51.001215 | orchestrator | Thursday 16 April 2026 08:00:34 +0000 (0:00:01.231) 0:14:41.054 ******** 2026-04-16 08:00:51.001224 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001233 | orchestrator | 2026-04-16 08:00:51.001242 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:00:51.001251 | orchestrator | Thursday 16 April 2026 08:00:35 +0000 (0:00:01.256) 0:14:42.311 ******** 2026-04-16 08:00:51.001259 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:51.001268 | orchestrator | 2026-04-16 08:00:51.001277 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:00:51.001286 | orchestrator | Thursday 16 April 2026 08:00:36 +0000 (0:00:01.183) 0:14:43.495 ******** 2026-04-16 08:00:51.001295 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:00:51.001305 | orchestrator | 2026-04-16 08:00:51.001315 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:00:51.001327 | orchestrator | Thursday 16 April 2026 08:00:39 +0000 (0:00:02.945) 0:14:46.441 ******** 2026-04-16 08:00:51.001337 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:00:51.001349 | orchestrator | 2026-04-16 08:00:51.001360 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:00:51.001371 | orchestrator | Thursday 16 April 2026 08:00:40 +0000 (0:00:01.159) 0:14:47.600 ******** 2026-04-16 08:00:51.001382 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001392 | orchestrator | 2026-04-16 08:00:51.001403 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:00:51.001414 | orchestrator | Thursday 16 April 2026 08:00:41 +0000 (0:00:01.093) 0:14:48.694 ******** 2026-04-16 08:00:51.001425 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001436 | orchestrator | 2026-04-16 08:00:51.001447 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:00:51.001458 | orchestrator | Thursday 16 April 2026 08:00:43 +0000 (0:00:01.221) 0:14:49.916 ******** 2026-04-16 08:00:51.001469 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001479 | orchestrator | 2026-04-16 08:00:51.001490 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:00:51.001501 | orchestrator | Thursday 16 April 2026 08:00:44 +0000 (0:00:01.120) 0:14:51.036 ******** 
2026-04-16 08:00:51.001512 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001523 | orchestrator | 2026-04-16 08:00:51.001534 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:00:51.001544 | orchestrator | Thursday 16 April 2026 08:00:45 +0000 (0:00:01.107) 0:14:52.144 ******** 2026-04-16 08:00:51.001555 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001566 | orchestrator | 2026-04-16 08:00:51.001577 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:00:51.001588 | orchestrator | Thursday 16 April 2026 08:00:46 +0000 (0:00:01.095) 0:14:53.239 ******** 2026-04-16 08:00:51.001599 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001609 | orchestrator | 2026-04-16 08:00:51.001621 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:00:51.001631 | orchestrator | Thursday 16 April 2026 08:00:47 +0000 (0:00:01.088) 0:14:54.327 ******** 2026-04-16 08:00:51.001642 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001659 | orchestrator | 2026-04-16 08:00:51.001728 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:00:51.001739 | orchestrator | Thursday 16 April 2026 08:00:48 +0000 (0:00:01.108) 0:14:55.436 ******** 2026-04-16 08:00:51.001748 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001757 | orchestrator | 2026-04-16 08:00:51.001766 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:00:51.001775 | orchestrator | Thursday 16 April 2026 08:00:49 +0000 (0:00:01.091) 0:14:56.528 ******** 2026-04-16 08:00:51.001784 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:51.001793 | orchestrator | 2026-04-16 08:00:51.001802 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-04-16 08:00:51.001810 | orchestrator | Thursday 16 April 2026 08:00:50 +0000 (0:00:01.136) 0:14:57.664 ******** 2026-04-16 08:00:51.001827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:00:52.242492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:00:52.242598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:00:52.242623 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:00:52.242635 | orchestrator | 2026-04-16 08:00:52.242649 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:00:52.242661 | orchestrator | Thursday 16 April 2026 08:00:52 +0000 (0:00:01.244) 0:14:58.909 ******** 2026-04-16 08:00:52.242729 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:52.242754 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:52.242766 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:00:52.242787 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751109 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751212 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751226 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751273 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751292 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751302 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:01:05.751312 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751322 | orchestrator | 2026-04-16 08:01:05.751332 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:01:05.751342 | 
orchestrator | Thursday 16 April 2026 08:00:53 +0000 (0:00:01.210) 0:15:00.119 ******** 2026-04-16 08:01:05.751352 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:01:05.751361 | orchestrator | 2026-04-16 08:01:05.751379 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:01:05.751388 | orchestrator | Thursday 16 April 2026 08:00:54 +0000 (0:00:01.495) 0:15:01.614 ******** 2026-04-16 08:01:05.751397 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:01:05.751405 | orchestrator | 2026-04-16 08:01:05.751414 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:01:05.751423 | orchestrator | Thursday 16 April 2026 08:00:55 +0000 (0:00:01.120) 0:15:02.735 ******** 2026-04-16 08:01:05.751432 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:01:05.751441 | orchestrator | 2026-04-16 08:01:05.751450 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:01:05.751459 | orchestrator | Thursday 16 April 2026 08:00:57 +0000 (0:00:01.446) 0:15:04.182 ******** 2026-04-16 08:01:05.751468 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751477 | orchestrator | 2026-04-16 08:01:05.751486 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:01:05.751495 | orchestrator | Thursday 16 April 2026 08:00:58 +0000 (0:00:01.155) 0:15:05.337 ******** 2026-04-16 08:01:05.751504 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751513 | orchestrator | 2026-04-16 08:01:05.751522 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:01:05.751531 | orchestrator | Thursday 16 April 2026 08:00:59 +0000 (0:00:01.201) 0:15:06.539 ******** 2026-04-16 08:01:05.751540 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751548 | orchestrator | 2026-04-16 08:01:05.751557 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:01:05.751566 | orchestrator | Thursday 16 April 2026 08:01:00 +0000 (0:00:01.133) 0:15:07.672 ******** 2026-04-16 08:01:05.751575 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-16 08:01:05.751584 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-04-16 08:01:05.751593 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:01:05.751602 | orchestrator | 2026-04-16 08:01:05.751611 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:01:05.751620 | orchestrator | Thursday 16 April 2026 08:01:02 +0000 (0:00:01.898) 0:15:09.571 ******** 2026-04-16 08:01:05.751630 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 08:01:05.751641 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 08:01:05.751652 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 08:01:05.751663 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751674 | orchestrator | 2026-04-16 08:01:05.751732 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:01:05.751743 | orchestrator | Thursday 16 April 2026 08:01:03 +0000 (0:00:01.168) 0:15:10.740 ******** 2026-04-16 08:01:05.751753 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:05.751763 | orchestrator | 2026-04-16 08:01:05.751774 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:01:05.751783 | orchestrator | Thursday 16 April 2026 08:01:05 +0000 (0:00:01.134) 0:15:11.874 ******** 2026-04-16 08:01:05.751793 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:01:05.751804 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-04-16 08:01:05.751815 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:01:05.751825 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:01:05.751835 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:01:05.751852 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:01:44.107219 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:01:44.107306 | orchestrator | 2026-04-16 08:01:44.107315 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:01:44.107350 | orchestrator | Thursday 16 April 2026 08:01:06 +0000 (0:00:01.807) 0:15:13.682 ******** 2026-04-16 08:01:44.107357 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:01:44.107366 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:01:44.107376 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:01:44.107385 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:01:44.107393 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:01:44.107401 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:01:44.107416 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:01:44.107426 | orchestrator | 2026-04-16 08:01:44.107435 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-04-16 08:01:44.107445 | orchestrator | Thursday 16 April 2026 08:01:09 +0000 (0:00:02.137) 0:15:15.819 
******** 2026-04-16 08:01:44.107454 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107464 | orchestrator | 2026-04-16 08:01:44.107472 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-04-16 08:01:44.107481 | orchestrator | Thursday 16 April 2026 08:01:09 +0000 (0:00:00.828) 0:15:16.648 ******** 2026-04-16 08:01:44.107491 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107500 | orchestrator | 2026-04-16 08:01:44.107509 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-04-16 08:01:44.107519 | orchestrator | Thursday 16 April 2026 08:01:10 +0000 (0:00:00.880) 0:15:17.528 ******** 2026-04-16 08:01:44.107528 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107536 | orchestrator | 2026-04-16 08:01:44.107546 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-04-16 08:01:44.107555 | orchestrator | Thursday 16 April 2026 08:01:11 +0000 (0:00:00.772) 0:15:18.300 ******** 2026-04-16 08:01:44.107564 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107572 | orchestrator | 2026-04-16 08:01:44.107581 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-04-16 08:01:44.107590 | orchestrator | Thursday 16 April 2026 08:01:12 +0000 (0:00:00.838) 0:15:19.139 ******** 2026-04-16 08:01:44.107596 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107602 | orchestrator | 2026-04-16 08:01:44.107607 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-04-16 08:01:44.107613 | orchestrator | Thursday 16 April 2026 08:01:13 +0000 (0:00:00.810) 0:15:19.949 ******** 2026-04-16 08:01:44.107619 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 08:01:44.107625 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 
08:01:44.107631 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 08:01:44.107637 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107643 | orchestrator | 2026-04-16 08:01:44.107648 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-04-16 08:01:44.107654 | orchestrator | Thursday 16 April 2026 08:01:14 +0000 (0:00:01.043) 0:15:20.993 ******** 2026-04-16 08:01:44.107660 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-04-16 08:01:44.107666 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-04-16 08:01:44.107672 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-04-16 08:01:44.107677 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-04-16 08:01:44.107683 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-04-16 08:01:44.107689 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-04-16 08:01:44.107757 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107765 | orchestrator | 2026-04-16 08:01:44.107770 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-04-16 08:01:44.107776 | orchestrator | Thursday 16 April 2026 08:01:15 +0000 (0:00:01.637) 0:15:22.630 ******** 2026-04-16 08:01:44.107783 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:01:44.107791 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:01:44.107798 | orchestrator | 2026-04-16 08:01:44.107804 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-04-16 08:01:44.107811 | orchestrator | Thursday 16 April 2026 08:01:19 +0000 (0:00:03.212) 0:15:25.842 ******** 
2026-04-16 08:01:44.107818 | orchestrator | changed: [testbed-node-2] 2026-04-16 08:01:44.107824 | orchestrator | 2026-04-16 08:01:44.107831 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:01:44.107838 | orchestrator | Thursday 16 April 2026 08:01:21 +0000 (0:00:02.136) 0:15:27.979 ******** 2026-04-16 08:01:44.107845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-04-16 08:01:44.107852 | orchestrator | 2026-04-16 08:01:44.107859 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:01:44.107865 | orchestrator | Thursday 16 April 2026 08:01:22 +0000 (0:00:01.205) 0:15:29.185 ******** 2026-04-16 08:01:44.107872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-16 08:01:44.107878 | orchestrator | 2026-04-16 08:01:44.107885 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:01:44.107905 | orchestrator | Thursday 16 April 2026 08:01:23 +0000 (0:00:01.106) 0:15:30.291 ******** 2026-04-16 08:01:44.107912 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:01:44.107918 | orchestrator | 2026-04-16 08:01:44.107925 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:01:44.107937 | orchestrator | Thursday 16 April 2026 08:01:25 +0000 (0:00:01.500) 0:15:31.792 ******** 2026-04-16 08:01:44.107944 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:01:44.107951 | orchestrator | 2026-04-16 08:01:44.107958 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:01:44.107965 | orchestrator | Thursday 16 April 2026 08:01:26 +0000 (0:00:01.096) 0:15:32.889 ******** 2026-04-16 08:01:44.107971 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
08:01:44.107978 | orchestrator |
2026-04-16 08:01:44.107985 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:01:44.107991 | orchestrator | Thursday 16 April 2026 08:01:27 +0000 (0:00:01.108) 0:15:33.997 ********
2026-04-16 08:01:44.107998 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108006 | orchestrator |
2026-04-16 08:01:44.108016 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:01:44.108025 | orchestrator | Thursday 16 April 2026 08:01:28 +0000 (0:00:01.082) 0:15:35.079 ********
2026-04-16 08:01:44.108033 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108042 | orchestrator |
2026-04-16 08:01:44.108051 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:01:44.108060 | orchestrator | Thursday 16 April 2026 08:01:29 +0000 (0:00:01.558) 0:15:36.638 ********
2026-04-16 08:01:44.108070 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108080 | orchestrator |
2026-04-16 08:01:44.108090 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:01:44.108101 | orchestrator | Thursday 16 April 2026 08:01:30 +0000 (0:00:01.113) 0:15:37.752 ********
2026-04-16 08:01:44.108112 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108122 | orchestrator |
2026-04-16 08:01:44.108133 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:01:44.108144 | orchestrator | Thursday 16 April 2026 08:01:32 +0000 (0:00:01.151) 0:15:38.903 ********
2026-04-16 08:01:44.108154 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108176 | orchestrator |
2026-04-16 08:01:44.108187 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:01:44.108197 | orchestrator | Thursday 16 April 2026 08:01:33 +0000 (0:00:01.591) 0:15:40.494 ********
2026-04-16 08:01:44.108208 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108218 | orchestrator |
2026-04-16 08:01:44.108229 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:01:44.108239 | orchestrator | Thursday 16 April 2026 08:01:35 +0000 (0:00:01.649) 0:15:42.144 ********
2026-04-16 08:01:44.108250 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108260 | orchestrator |
2026-04-16 08:01:44.108271 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:01:44.108281 | orchestrator | Thursday 16 April 2026 08:01:36 +0000 (0:00:00.767) 0:15:42.912 ********
2026-04-16 08:01:44.108292 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108302 | orchestrator |
2026-04-16 08:01:44.108313 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:01:44.108323 | orchestrator | Thursday 16 April 2026 08:01:36 +0000 (0:00:00.795) 0:15:43.707 ********
2026-04-16 08:01:44.108334 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108344 | orchestrator |
2026-04-16 08:01:44.108355 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:01:44.108365 | orchestrator | Thursday 16 April 2026 08:01:37 +0000 (0:00:00.757) 0:15:44.465 ********
2026-04-16 08:01:44.108376 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108386 | orchestrator |
2026-04-16 08:01:44.108397 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:01:44.108408 | orchestrator | Thursday 16 April 2026 08:01:38 +0000 (0:00:00.771) 0:15:45.236 ********
2026-04-16 08:01:44.108418 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108429 | orchestrator |
2026-04-16 08:01:44.108439 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:01:44.108450 | orchestrator | Thursday 16 April 2026 08:01:39 +0000 (0:00:00.777) 0:15:46.014 ********
2026-04-16 08:01:44.108460 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108471 | orchestrator |
2026-04-16 08:01:44.108481 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:01:44.108492 | orchestrator | Thursday 16 April 2026 08:01:40 +0000 (0:00:00.785) 0:15:46.800 ********
2026-04-16 08:01:44.108502 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108513 | orchestrator |
2026-04-16 08:01:44.108523 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:01:44.108534 | orchestrator | Thursday 16 April 2026 08:01:40 +0000 (0:00:00.790) 0:15:47.590 ********
2026-04-16 08:01:44.108544 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108554 | orchestrator |
2026-04-16 08:01:44.108564 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:01:44.108575 | orchestrator | Thursday 16 April 2026 08:01:41 +0000 (0:00:00.778) 0:15:48.368 ********
2026-04-16 08:01:44.108585 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108594 | orchestrator |
2026-04-16 08:01:44.108603 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:01:44.108613 | orchestrator | Thursday 16 April 2026 08:01:42 +0000 (0:00:00.795) 0:15:49.164 ********
2026-04-16 08:01:44.108623 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:01:44.108632 | orchestrator |
2026-04-16 08:01:44.108642 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:01:44.108652 | orchestrator | Thursday 16 April 2026 08:01:43 +0000 (0:00:00.792) 0:15:49.956 ********
2026-04-16 08:01:44.108661 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108670 | orchestrator |
2026-04-16 08:01:44.108679 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:01:44.108690 | orchestrator | Thursday 16 April 2026 08:01:43 +0000 (0:00:00.763) 0:15:50.720 ********
2026-04-16 08:01:44.108718 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:01:44.108735 | orchestrator |
2026-04-16 08:01:44.108748 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:02:25.975315 | orchestrator | Thursday 16 April 2026 08:01:44 +0000 (0:00:00.765) 0:15:51.486 ********
2026-04-16 08:02:25.975423 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975437 | orchestrator |
2026-04-16 08:02:25.975461 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:02:25.975471 | orchestrator | Thursday 16 April 2026 08:01:45 +0000 (0:00:00.798) 0:15:52.284 ********
2026-04-16 08:02:25.975480 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975488 | orchestrator |
2026-04-16 08:02:25.975497 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:02:25.975506 | orchestrator | Thursday 16 April 2026 08:01:46 +0000 (0:00:00.754) 0:15:53.038 ********
2026-04-16 08:02:25.975515 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975524 | orchestrator |
2026-04-16 08:02:25.975532 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:02:25.975541 | orchestrator | Thursday 16 April 2026 08:01:47 +0000 (0:00:00.760) 0:15:53.799 ********
2026-04-16 08:02:25.975550 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975558 | orchestrator |
2026-04-16 08:02:25.975567 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:02:25.975576 | orchestrator | Thursday 16 April 2026 08:01:47 +0000 (0:00:00.757) 0:15:54.556 ********
2026-04-16 08:02:25.975585 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975594 | orchestrator |
2026-04-16 08:02:25.975603 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:02:25.975612 | orchestrator | Thursday 16 April 2026 08:01:48 +0000 (0:00:00.755) 0:15:55.312 ********
2026-04-16 08:02:25.975621 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975630 | orchestrator |
2026-04-16 08:02:25.975638 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:02:25.975647 | orchestrator | Thursday 16 April 2026 08:01:49 +0000 (0:00:00.770) 0:15:56.083 ********
2026-04-16 08:02:25.975656 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975664 | orchestrator |
2026-04-16 08:02:25.975673 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:02:25.975682 | orchestrator | Thursday 16 April 2026 08:01:50 +0000 (0:00:00.748) 0:15:56.831 ********
2026-04-16 08:02:25.975691 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975700 | orchestrator |
2026-04-16 08:02:25.975708 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:02:25.975745 | orchestrator | Thursday 16 April 2026 08:01:50 +0000 (0:00:00.801) 0:15:57.633 ********
2026-04-16 08:02:25.975755 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975764 | orchestrator |
2026-04-16 08:02:25.975772 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:02:25.975781 | orchestrator | Thursday 16 April 2026 08:01:51 +0000 (0:00:00.754) 0:15:58.387 ********
2026-04-16 08:02:25.975790 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975798 | orchestrator |
2026-04-16 08:02:25.975807 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:02:25.975816 | orchestrator | Thursday 16 April 2026 08:01:52 +0000 (0:00:00.759) 0:15:59.146 ********
2026-04-16 08:02:25.975825 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.975835 | orchestrator |
2026-04-16 08:02:25.975843 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:02:25.975853 | orchestrator | Thursday 16 April 2026 08:01:53 +0000 (0:00:01.593) 0:16:00.740 ********
2026-04-16 08:02:25.975863 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.975873 | orchestrator |
2026-04-16 08:02:25.975884 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:02:25.975894 | orchestrator | Thursday 16 April 2026 08:01:56 +0000 (0:00:02.084) 0:16:02.825 ********
2026-04-16 08:02:25.975904 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-16 08:02:25.975945 | orchestrator |
2026-04-16 08:02:25.975956 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 08:02:25.975966 | orchestrator | Thursday 16 April 2026 08:01:57 +0000 (0:00:01.148) 0:16:03.974 ********
2026-04-16 08:02:25.975976 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.975986 | orchestrator |
2026-04-16 08:02:25.975997 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 08:02:25.976007 | orchestrator | Thursday 16 April 2026 08:01:58 +0000 (0:00:01.134) 0:16:05.109 ********
2026-04-16 08:02:25.976017 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976027 | orchestrator |
2026-04-16 08:02:25.976037 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 08:02:25.976047 | orchestrator | Thursday 16 April 2026 08:01:59 +0000 (0:00:01.125) 0:16:06.234 ********
2026-04-16 08:02:25.976058 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:02:25.976068 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:02:25.976078 | orchestrator |
2026-04-16 08:02:25.976098 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 08:02:25.976108 | orchestrator | Thursday 16 April 2026 08:02:01 +0000 (0:00:01.836) 0:16:08.071 ********
2026-04-16 08:02:25.976118 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.976128 | orchestrator |
2026-04-16 08:02:25.976139 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 08:02:25.976149 | orchestrator | Thursday 16 April 2026 08:02:02 +0000 (0:00:01.459) 0:16:09.530 ********
2026-04-16 08:02:25.976159 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976170 | orchestrator |
2026-04-16 08:02:25.976180 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 08:02:25.976190 | orchestrator | Thursday 16 April 2026 08:02:03 +0000 (0:00:01.159) 0:16:10.690 ********
2026-04-16 08:02:25.976200 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976210 | orchestrator |
2026-04-16 08:02:25.976220 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 08:02:25.976246 | orchestrator | Thursday 16 April 2026 08:02:04 +0000 (0:00:00.800) 0:16:11.491 ********
2026-04-16 08:02:25.976255 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976264 | orchestrator |
2026-04-16 08:02:25.976278 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 08:02:25.976287 | orchestrator | Thursday 16 April 2026 08:02:05 +0000 (0:00:00.779) 0:16:12.270 ********
2026-04-16 08:02:25.976295 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-16 08:02:25.976304 | orchestrator |
2026-04-16 08:02:25.976313 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-16 08:02:25.976321 | orchestrator | Thursday 16 April 2026 08:02:06 +0000 (0:00:01.126) 0:16:13.396 ********
2026-04-16 08:02:25.976330 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.976339 | orchestrator |
2026-04-16 08:02:25.976348 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-16 08:02:25.976356 | orchestrator | Thursday 16 April 2026 08:02:08 +0000 (0:00:01.769) 0:16:15.166 ********
2026-04-16 08:02:25.976365 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-16 08:02:25.976374 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-16 08:02:25.976383 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-16 08:02:25.976391 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976400 | orchestrator |
2026-04-16 08:02:25.976409 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-16 08:02:25.976417 | orchestrator | Thursday 16 April 2026 08:02:09 +0000 (0:00:01.151) 0:16:16.317 ********
2026-04-16 08:02:25.976426 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976442 | orchestrator |
2026-04-16 08:02:25.976450 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-16 08:02:25.976459 | orchestrator | Thursday 16 April 2026 08:02:10 +0000 (0:00:01.127) 0:16:17.445 ********
2026-04-16 08:02:25.976468 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976477 | orchestrator |
2026-04-16 08:02:25.976486 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-16 08:02:25.976494 | orchestrator | Thursday 16 April 2026 08:02:11 +0000 (0:00:01.136) 0:16:18.581 ********
2026-04-16 08:02:25.976503 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976512 | orchestrator |
2026-04-16 08:02:25.976520 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-16 08:02:25.976529 | orchestrator | Thursday 16 April 2026 08:02:12 +0000 (0:00:01.123) 0:16:19.704 ********
2026-04-16 08:02:25.976538 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976547 | orchestrator |
2026-04-16 08:02:25.976555 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-16 08:02:25.976564 | orchestrator | Thursday 16 April 2026 08:02:14 +0000 (0:00:01.130) 0:16:20.835 ********
2026-04-16 08:02:25.976573 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976582 | orchestrator |
2026-04-16 08:02:25.976590 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 08:02:25.976599 | orchestrator | Thursday 16 April 2026 08:02:14 +0000 (0:00:00.777) 0:16:21.613 ********
2026-04-16 08:02:25.976608 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.976616 | orchestrator |
2026-04-16 08:02:25.976625 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 08:02:25.976634 | orchestrator | Thursday 16 April 2026 08:02:17 +0000 (0:00:02.289) 0:16:23.902 ********
2026-04-16 08:02:25.976643 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:25.976651 | orchestrator |
2026-04-16 08:02:25.976660 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 08:02:25.976669 | orchestrator | Thursday 16 April 2026 08:02:17 +0000 (0:00:00.774) 0:16:24.676 ********
2026-04-16 08:02:25.976678 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-16 08:02:25.976686 | orchestrator |
2026-04-16 08:02:25.976695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-16 08:02:25.976704 | orchestrator | Thursday 16 April 2026 08:02:19 +0000 (0:00:01.130) 0:16:25.806 ********
2026-04-16 08:02:25.976712 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976766 | orchestrator |
2026-04-16 08:02:25.976776 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-16 08:02:25.976784 | orchestrator | Thursday 16 April 2026 08:02:20 +0000 (0:00:01.105) 0:16:26.912 ********
2026-04-16 08:02:25.976796 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976810 | orchestrator |
2026-04-16 08:02:25.976831 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-16 08:02:25.976847 | orchestrator | Thursday 16 April 2026 08:02:21 +0000 (0:00:01.108) 0:16:28.021 ********
2026-04-16 08:02:25.976861 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976875 | orchestrator |
2026-04-16 08:02:25.976889 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-16 08:02:25.976903 | orchestrator | Thursday 16 April 2026 08:02:22 +0000 (0:00:01.101) 0:16:29.123 ********
2026-04-16 08:02:25.976916 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976930 | orchestrator |
2026-04-16 08:02:25.976945 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-16 08:02:25.976959 | orchestrator | Thursday 16 April 2026 08:02:23 +0000 (0:00:01.136) 0:16:30.259 ********
2026-04-16 08:02:25.976974 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.976988 | orchestrator |
2026-04-16 08:02:25.977002 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-16 08:02:25.977016 | orchestrator | Thursday 16 April 2026 08:02:24 +0000 (0:00:01.144) 0:16:31.404 ********
2026-04-16 08:02:25.977042 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.977057 | orchestrator |
2026-04-16 08:02:25.977071 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-16 08:02:25.977086 | orchestrator | Thursday 16 April 2026 08:02:25 +0000 (0:00:01.156) 0:16:32.561 ********
2026-04-16 08:02:25.977101 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:25.977117 | orchestrator |
2026-04-16 08:02:25.977143 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-16 08:02:59.400580 | orchestrator | Thursday 16 April 2026 08:02:26 +0000 (0:00:01.142) 0:16:33.703 ********
2026-04-16 08:02:59.400670 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400678 | orchestrator |
2026-04-16 08:02:59.400684 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-16 08:02:59.400689 | orchestrator | Thursday 16 April 2026 08:02:28 +0000 (0:00:01.143) 0:16:34.846 ********
2026-04-16 08:02:59.400694 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:59.400700 | orchestrator |
2026-04-16 08:02:59.400705 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 08:02:59.400710 | orchestrator | Thursday 16 April 2026 08:02:28 +0000 (0:00:00.788) 0:16:35.635 ********
2026-04-16 08:02:59.400715 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-16 08:02:59.400721 | orchestrator |
2026-04-16 08:02:59.400725 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-16 08:02:59.400766 | orchestrator | Thursday 16 April 2026 08:02:29 +0000 (0:00:01.105) 0:16:36.741 ********
2026-04-16 08:02:59.400771 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-16 08:02:59.400777 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-16 08:02:59.400781 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-16 08:02:59.400786 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-16 08:02:59.400791 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-16 08:02:59.400795 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-16 08:02:59.400800 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-16 08:02:59.400805 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-16 08:02:59.400810 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-16 08:02:59.400814 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-16 08:02:59.400819 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-16 08:02:59.400824 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-16 08:02:59.400828 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-16 08:02:59.400833 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-16 08:02:59.400837 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-16 08:02:59.400842 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-16 08:02:59.400847 | orchestrator |
2026-04-16 08:02:59.400852 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-16 08:02:59.400856 | orchestrator | Thursday 16 April 2026 08:02:36 +0000 (0:00:06.801) 0:16:43.543 ********
2026-04-16 08:02:59.400861 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400865 | orchestrator |
2026-04-16 08:02:59.400870 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 08:02:59.400875 | orchestrator | Thursday 16 April 2026 08:02:37 +0000 (0:00:00.769) 0:16:44.313 ********
2026-04-16 08:02:59.400879 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400884 | orchestrator |
2026-04-16 08:02:59.400888 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:02:59.400893 | orchestrator | Thursday 16 April 2026 08:02:38 +0000 (0:00:00.764) 0:16:45.077 ********
2026-04-16 08:02:59.400897 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400918 | orchestrator |
2026-04-16 08:02:59.400923 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:02:59.400928 | orchestrator | Thursday 16 April 2026 08:02:39 +0000 (0:00:00.740) 0:16:45.817 ********
2026-04-16 08:02:59.400932 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400937 | orchestrator |
2026-04-16 08:02:59.400941 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:02:59.400946 | orchestrator | Thursday 16 April 2026 08:02:39 +0000 (0:00:00.755) 0:16:46.572 ********
2026-04-16 08:02:59.400950 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400955 | orchestrator |
2026-04-16 08:02:59.400959 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:02:59.400964 | orchestrator | Thursday 16 April 2026 08:02:40 +0000 (0:00:00.766) 0:16:47.339 ********
2026-04-16 08:02:59.400968 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400973 | orchestrator |
2026-04-16 08:02:59.400977 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:02:59.400982 | orchestrator | Thursday 16 April 2026 08:02:41 +0000 (0:00:00.757) 0:16:48.096 ********
2026-04-16 08:02:59.400987 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.400991 | orchestrator |
2026-04-16 08:02:59.400996 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:02:59.401000 | orchestrator | Thursday 16 April 2026 08:02:42 +0000 (0:00:00.759) 0:16:48.855 ********
2026-04-16 08:02:59.401005 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401009 | orchestrator |
2026-04-16 08:02:59.401014 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:02:59.401018 | orchestrator | Thursday 16 April 2026 08:02:42 +0000 (0:00:00.797) 0:16:49.653 ********
2026-04-16 08:02:59.401024 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401031 | orchestrator |
2026-04-16 08:02:59.401039 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:02:59.401046 | orchestrator | Thursday 16 April 2026 08:02:43 +0000 (0:00:00.769) 0:16:50.423 ********
2026-04-16 08:02:59.401053 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401060 | orchestrator |
2026-04-16 08:02:59.401067 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:02:59.401074 | orchestrator | Thursday 16 April 2026 08:02:44 +0000 (0:00:00.793) 0:16:51.216 ********
2026-04-16 08:02:59.401081 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401088 | orchestrator |
2026-04-16 08:02:59.401108 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:02:59.401120 | orchestrator | Thursday 16 April 2026 08:02:45 +0000 (0:00:00.793) 0:16:52.009 ********
2026-04-16 08:02:59.401127 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401134 | orchestrator |
2026-04-16 08:02:59.401149 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:02:59.401158 | orchestrator | Thursday 16 April 2026 08:02:46 +0000 (0:00:00.770) 0:16:52.780 ********
2026-04-16 08:02:59.401165 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401174 | orchestrator |
2026-04-16 08:02:59.401183 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:02:59.401191 | orchestrator | Thursday 16 April 2026 08:02:46 +0000 (0:00:00.849) 0:16:53.630 ********
2026-04-16 08:02:59.401199 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401205 | orchestrator |
2026-04-16 08:02:59.401210 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:02:59.401215 | orchestrator | Thursday 16 April 2026 08:02:47 +0000 (0:00:00.748) 0:16:54.379 ********
2026-04-16 08:02:59.401221 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401226 | orchestrator |
2026-04-16 08:02:59.401233 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:02:59.401241 | orchestrator | Thursday 16 April 2026 08:02:48 +0000 (0:00:00.850) 0:16:55.230 ********
2026-04-16 08:02:59.401256 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401264 | orchestrator |
2026-04-16 08:02:59.401272 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:02:59.401280 | orchestrator | Thursday 16 April 2026 08:02:49 +0000 (0:00:00.753) 0:16:55.984 ********
2026-04-16 08:02:59.401288 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401296 | orchestrator |
2026-04-16 08:02:59.401304 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:02:59.401312 | orchestrator | Thursday 16 April 2026 08:02:49 +0000 (0:00:00.742) 0:16:56.726 ********
2026-04-16 08:02:59.401322 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401327 | orchestrator |
2026-04-16 08:02:59.401332 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:02:59.401338 | orchestrator | Thursday 16 April 2026 08:02:50 +0000 (0:00:00.794) 0:16:57.521 ********
2026-04-16 08:02:59.401343 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401348 | orchestrator |
2026-04-16 08:02:59.401354 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:02:59.401359 | orchestrator | Thursday 16 April 2026 08:02:51 +0000 (0:00:00.750) 0:16:58.271 ********
2026-04-16 08:02:59.401364 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401370 | orchestrator |
2026-04-16 08:02:59.401375 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:02:59.401380 | orchestrator | Thursday 16 April 2026 08:02:52 +0000 (0:00:00.747) 0:16:59.018 ********
2026-04-16 08:02:59.401385 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401390 | orchestrator |
2026-04-16 08:02:59.401396 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:02:59.401401 | orchestrator | Thursday 16 April 2026 08:02:53 +0000 (0:00:00.791) 0:16:59.809 ********
2026-04-16 08:02:59.401406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:02:59.401412 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:02:59.401417 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:02:59.401422 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401428 | orchestrator |
2026-04-16 08:02:59.401433 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:02:59.401439 | orchestrator | Thursday 16 April 2026 08:02:54 +0000 (0:00:01.025) 0:17:00.835 ********
2026-04-16 08:02:59.401444 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:02:59.401449 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:02:59.401455 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:02:59.401460 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401465 | orchestrator |
2026-04-16 08:02:59.401470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:02:59.401476 | orchestrator | Thursday 16 April 2026 08:02:55 +0000 (0:00:01.071) 0:17:01.907 ********
2026-04-16 08:02:59.401481 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:02:59.401486 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:02:59.401491 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:02:59.401496 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401501 | orchestrator |
2026-04-16 08:02:59.401507 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:02:59.401512 | orchestrator | Thursday 16 April 2026 08:02:56 +0000 (0:00:01.034) 0:17:02.942 ********
2026-04-16 08:02:59.401517 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401523 | orchestrator |
2026-04-16 08:02:59.401528 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:02:59.401533 | orchestrator | Thursday 16 April 2026 08:02:56 +0000 (0:00:00.756) 0:17:03.699 ********
2026-04-16 08:02:59.401543 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-16 08:02:59.401549 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:02:59.401554 | orchestrator |
2026-04-16 08:02:59.401560 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:02:59.401566 | orchestrator | Thursday 16 April 2026 08:02:57 +0000 (0:00:00.876) 0:17:04.576 ********
2026-04-16 08:02:59.401571 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:59.401576 | orchestrator |
2026-04-16 08:02:59.401581 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-04-16 08:02:59.401586 | orchestrator | Thursday 16 April 2026 08:02:59 +0000 (0:00:01.417) 0:17:05.994 ********
2026-04-16 08:02:59.401590 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:02:59.401595 | orchestrator |
2026-04-16 08:02:59.401604 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-04-16 08:04:20.765751 | orchestrator | Thursday 16 April 2026 08:03:00 +0000 (0:00:00.768) 0:17:06.762 ********
2026-04-16 08:04:20.766000 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-04-16 08:04:20.766101 | orchestrator |
2026-04-16 08:04:20.766127 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-04-16 08:04:20.766148 | orchestrator | Thursday 16 April 2026 08:03:01 +0000 (0:00:01.121) 0:17:07.883 ********
2026-04-16 08:04:20.766169 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766191 | orchestrator |
2026-04-16 08:04:20.766211 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-04-16 08:04:20.766230 | orchestrator | Thursday 16 April 2026 08:03:04 +0000 (0:00:03.542) 0:17:11.426 ********
2026-04-16 08:04:20.766250 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:04:20.766271 | orchestrator |
2026-04-16 08:04:20.766294 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-04-16 08:04:20.766316 | orchestrator | Thursday 16 April 2026 08:03:05 +0000 (0:00:01.214) 0:17:12.641 ********
2026-04-16 08:04:20.766339 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766361 | orchestrator |
2026-04-16 08:04:20.766383 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-04-16 08:04:20.766406 | orchestrator | Thursday 16 April 2026 08:03:07 +0000 (0:00:01.116) 0:17:13.758 ********
2026-04-16 08:04:20.766428 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766449 | orchestrator |
2026-04-16 08:04:20.766471 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-04-16 08:04:20.766495 | orchestrator | Thursday 16 April 2026 08:03:08 +0000 (0:00:01.155) 0:17:14.914 ********
2026-04-16 08:04:20.766517 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:04:20.766575 | orchestrator |
2026-04-16 08:04:20.766598 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-04-16 08:04:20.766618 | orchestrator | Thursday 16 April 2026 08:03:10 +0000 (0:00:02.005) 0:17:16.919 ********
2026-04-16 08:04:20.766639 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766659 | orchestrator |
2026-04-16 08:04:20.766679 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-04-16 08:04:20.766699 | orchestrator | Thursday 16 April 2026 08:03:11 +0000 (0:00:01.522) 0:17:18.442 ********
2026-04-16 08:04:20.766718 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766737 | orchestrator |
2026-04-16 08:04:20.766756 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-04-16 08:04:20.766806 | orchestrator | Thursday 16 April 2026 08:03:13 +0000 (0:00:01.478) 0:17:19.921 ********
2026-04-16 08:04:20.766826 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.766842 | orchestrator |
2026-04-16 08:04:20.766861 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-04-16 08:04:20.766881 | orchestrator | Thursday 16 April 2026 08:03:14 +0000 (0:00:01.498) 0:17:21.420 ********
2026-04-16 08:04:20.766897 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:04:20.766916 | orchestrator |
2026-04-16 08:04:20.766935 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-04-16 08:04:20.766986 | orchestrator | Thursday 16 April 2026 08:03:16 +0000 (0:00:01.607) 0:17:23.028 ********
2026-04-16 08:04:20.767007 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:04:20.767025 | orchestrator |
2026-04-16 08:04:20.767043 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-04-16 08:04:20.767062 | orchestrator | Thursday 16 April 2026 08:03:17 +0000 (0:00:01.546) 0:17:24.574 ********
2026-04-16 08:04:20.767080 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:04:20.767097 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-04-16 08:04:20.767115 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-16 08:04:20.767133 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-04-16 08:04:20.767151 | orchestrator |
2026-04-16 08:04:20.767171 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-04-16 08:04:20.767192 | orchestrator | Thursday 16 April 2026 08:03:21 +0000 (0:00:03.869) 0:17:28.444 ********
2026-04-16 08:04:20.767213 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:04:20.767232 | orchestrator |
2026-04-16 08:04:20.767252 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-04-16 08:04:20.767272 | orchestrator | Thursday 16 April 2026 08:03:23 +0000 (0:00:02.059) 0:17:30.504 ********
2026-04-16 08:04:20.767290 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.767308 | orchestrator |
2026-04-16 08:04:20.767328 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-04-16 08:04:20.767347 | orchestrator | Thursday 16 April 2026 08:03:24 +0000 (0:00:01.125) 0:17:31.630 ********
2026-04-16 08:04:20.767364 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.767382 | orchestrator |
2026-04-16 08:04:20.767400 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-04-16 08:04:20.767419 | orchestrator | Thursday 16 April 2026 08:03:26 +0000 (0:00:01.149) 0:17:32.780 ********
2026-04-16 08:04:20.767437 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.767454 | orchestrator |
2026-04-16 08:04:20.767471 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-04-16 08:04:20.767490 | orchestrator | Thursday 16 April 2026 08:03:27 +0000 (0:00:01.874) 0:17:34.654 ********
2026-04-16 08:04:20.767507 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:04:20.767525 | orchestrator |
2026-04-16 08:04:20.767543 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-04-16 08:04:20.767562 | orchestrator | Thursday 16 April 2026 08:03:29 +0000 (0:00:01.438) 0:17:36.093 ********
2026-04-16 08:04:20.767579 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:04:20.767597 | orchestrator |
2026-04-16 08:04:20.767623 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2026-04-16 08:04:20.767646 | orchestrator | Thursday 16 April 2026 08:03:30 +0000 (0:00:00.731) 0:17:36.825 ********
2026-04-16 08:04:20.767662 | orchestrator | included:
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-04-16 08:04:20.767680 | orchestrator | 2026-04-16 08:04:20.767728 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-04-16 08:04:20.767759 | orchestrator | Thursday 16 April 2026 08:03:31 +0000 (0:00:01.111) 0:17:37.937 ******** 2026-04-16 08:04:20.767811 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:04:20.767827 | orchestrator | 2026-04-16 08:04:20.767844 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-04-16 08:04:20.767862 | orchestrator | Thursday 16 April 2026 08:03:32 +0000 (0:00:01.085) 0:17:39.022 ******** 2026-04-16 08:04:20.767879 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:04:20.767895 | orchestrator | 2026-04-16 08:04:20.767915 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-04-16 08:04:20.767930 | orchestrator | Thursday 16 April 2026 08:03:33 +0000 (0:00:01.105) 0:17:40.128 ******** 2026-04-16 08:04:20.767945 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-04-16 08:04:20.767978 | orchestrator | 2026-04-16 08:04:20.767995 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-04-16 08:04:20.768012 | orchestrator | Thursday 16 April 2026 08:03:34 +0000 (0:00:01.100) 0:17:41.229 ******** 2026-04-16 08:04:20.768028 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:04:20.768045 | orchestrator | 2026-04-16 08:04:20.768061 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-04-16 08:04:20.768078 | orchestrator | Thursday 16 April 2026 08:03:36 +0000 (0:00:02.226) 0:17:43.456 ******** 2026-04-16 08:04:20.768095 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:04:20.768111 | orchestrator | 2026-04-16 08:04:20.768128 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-04-16 08:04:20.768145 | orchestrator | Thursday 16 April 2026 08:03:38 +0000 (0:00:01.947) 0:17:45.403 ******** 2026-04-16 08:04:20.768162 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:04:20.768179 | orchestrator | 2026-04-16 08:04:20.768196 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-04-16 08:04:20.768212 | orchestrator | Thursday 16 April 2026 08:03:41 +0000 (0:00:02.440) 0:17:47.844 ******** 2026-04-16 08:04:20.768228 | orchestrator | changed: [testbed-node-2] 2026-04-16 08:04:20.768245 | orchestrator | 2026-04-16 08:04:20.768261 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-04-16 08:04:20.768278 | orchestrator | Thursday 16 April 2026 08:03:44 +0000 (0:00:03.100) 0:17:50.944 ******** 2026-04-16 08:04:20.768294 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-04-16 08:04:20.768311 | orchestrator | 2026-04-16 08:04:20.768328 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-04-16 08:04:20.768344 | orchestrator | Thursday 16 April 2026 08:03:45 +0000 (0:00:01.235) 0:17:52.180 ******** 2026-04-16 08:04:20.768362 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-04-16 08:04:20.768378 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:04:20.768395 | orchestrator | 2026-04-16 08:04:20.768412 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-04-16 08:04:20.768429 | orchestrator | Thursday 16 April 2026 08:04:08 +0000 (0:00:22.984) 0:18:15.165 ******** 2026-04-16 08:04:20.768445 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:04:20.768462 | orchestrator | 2026-04-16 08:04:20.768479 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-04-16 08:04:20.768496 | orchestrator | Thursday 16 April 2026 08:04:11 +0000 (0:00:02.757) 0:18:17.923 ******** 2026-04-16 08:04:20.768513 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:04:20.768529 | orchestrator | 2026-04-16 08:04:20.768546 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-04-16 08:04:20.768562 | orchestrator | Thursday 16 April 2026 08:04:11 +0000 (0:00:00.757) 0:18:18.681 ******** 2026-04-16 08:04:20.768583 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-04-16 08:04:20.768602 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-04-16 08:04:20.768620 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-04-16 08:04:20.768709 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-04-16 08:04:20.768756 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-04-16 08:05:02.535385 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__0d64cda0f6be5da0135b572d4ac68f62bf42af3a'}])  2026-04-16 08:05:02.535503 | orchestrator | 2026-04-16 08:05:02.535523 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-04-16 08:05:02.535537 | orchestrator | Thursday 16 April 2026 08:04:21 +0000 (0:00:09.562) 0:18:28.243 ******** 2026-04-16 08:05:02.535548 | orchestrator | changed: [testbed-node-2] 2026-04-16 08:05:02.535560 | orchestrator | 
2026-04-16 08:05:02.535571 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:05:02.535583 | orchestrator | Thursday 16 April 2026 08:04:23 +0000 (0:00:02.152) 0:18:30.395 ******** 2026-04-16 08:05:02.535594 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:05:02.535606 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-04-16 08:05:02.535617 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-04-16 08:05:02.535628 | orchestrator | 2026-04-16 08:05:02.535639 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:05:02.535650 | orchestrator | Thursday 16 April 2026 08:04:25 +0000 (0:00:01.762) 0:18:32.158 ******** 2026-04-16 08:05:02.535661 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 08:05:02.535673 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 08:05:02.535684 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 08:05:02.535695 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:05:02.535706 | orchestrator | 2026-04-16 08:05:02.535717 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-04-16 08:05:02.535728 | orchestrator | Thursday 16 April 2026 08:04:26 +0000 (0:00:01.028) 0:18:33.186 ******** 2026-04-16 08:05:02.535739 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:05:02.535750 | orchestrator | 2026-04-16 08:05:02.535761 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-04-16 08:05:02.535772 | orchestrator | Thursday 16 April 2026 08:04:27 +0000 (0:00:00.763) 0:18:33.950 ******** 2026-04-16 08:05:02.535783 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:05:02.535909 | orchestrator | 2026-04-16 08:05:02.535924 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-04-16 08:05:02.535938 | orchestrator | 2026-04-16 08:05:02.535951 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-04-16 08:05:02.535964 | orchestrator | Thursday 16 April 2026 08:04:30 +0000 (0:00:03.200) 0:18:37.150 ******** 2026-04-16 08:05:02.535976 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:05:02.535989 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:05:02.536003 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:05:02.536043 | orchestrator | 2026-04-16 08:05:02.536058 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-16 08:05:02.536071 | orchestrator | 2026-04-16 08:05:02.536084 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-16 08:05:02.536097 | orchestrator | Thursday 16 April 2026 08:04:32 +0000 (0:00:01.775) 0:18:38.925 ******** 2026-04-16 08:05:02.536110 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536123 | orchestrator | 2026-04-16 08:05:02.536136 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:05:02.536149 | orchestrator | Thursday 16 April 2026 08:04:33 +0000 (0:00:01.132) 0:18:40.058 ******** 2026-04-16 08:05:02.536162 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536174 | orchestrator | 2026-04-16 08:05:02.536187 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:05:02.536201 | orchestrator | Thursday 16 April 2026 08:04:34 +0000 (0:00:01.125) 0:18:41.183 
******** 2026-04-16 08:05:02.536213 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536226 | orchestrator | 2026-04-16 08:05:02.536239 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:05:02.536253 | orchestrator | Thursday 16 April 2026 08:04:35 +0000 (0:00:01.124) 0:18:42.308 ******** 2026-04-16 08:05:02.536267 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536279 | orchestrator | 2026-04-16 08:05:02.536290 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:05:02.536301 | orchestrator | Thursday 16 April 2026 08:04:36 +0000 (0:00:01.125) 0:18:43.434 ******** 2026-04-16 08:05:02.536312 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536323 | orchestrator | 2026-04-16 08:05:02.536333 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:05:02.536344 | orchestrator | Thursday 16 April 2026 08:04:37 +0000 (0:00:01.117) 0:18:44.551 ******** 2026-04-16 08:05:02.536355 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536366 | orchestrator | 2026-04-16 08:05:02.536377 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:05:02.536388 | orchestrator | Thursday 16 April 2026 08:04:38 +0000 (0:00:01.120) 0:18:45.672 ******** 2026-04-16 08:05:02.536398 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536409 | orchestrator | 2026-04-16 08:05:02.536420 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:05:02.536431 | orchestrator | Thursday 16 April 2026 08:04:40 +0000 (0:00:01.088) 0:18:46.761 ******** 2026-04-16 08:05:02.536456 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536468 | orchestrator | 2026-04-16 08:05:02.536479 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rbd_status] ****************************** 2026-04-16 08:05:02.536490 | orchestrator | Thursday 16 April 2026 08:04:41 +0000 (0:00:01.116) 0:18:47.878 ******** 2026-04-16 08:05:02.536519 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536530 | orchestrator | 2026-04-16 08:05:02.536541 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:05:02.536552 | orchestrator | Thursday 16 April 2026 08:04:42 +0000 (0:00:01.131) 0:18:49.009 ******** 2026-04-16 08:05:02.536563 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536574 | orchestrator | 2026-04-16 08:05:02.536585 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:05:02.536596 | orchestrator | Thursday 16 April 2026 08:04:43 +0000 (0:00:01.138) 0:18:50.148 ******** 2026-04-16 08:05:02.536607 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536617 | orchestrator | 2026-04-16 08:05:02.536628 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 08:05:02.536640 | orchestrator | Thursday 16 April 2026 08:04:44 +0000 (0:00:01.128) 0:18:51.276 ******** 2026-04-16 08:05:02.536651 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536662 | orchestrator | 2026-04-16 08:05:02.536673 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:05:02.536695 | orchestrator | Thursday 16 April 2026 08:04:45 +0000 (0:00:01.102) 0:18:52.379 ******** 2026-04-16 08:05:02.536706 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536717 | orchestrator | 2026-04-16 08:05:02.536728 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 08:05:02.536739 | orchestrator | Thursday 16 April 2026 08:04:46 +0000 (0:00:01.127) 0:18:53.507 ******** 2026-04-16 08:05:02.536750 | orchestrator | 
skipping: [testbed-node-0] 2026-04-16 08:05:02.536760 | orchestrator | 2026-04-16 08:05:02.536771 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:05:02.536782 | orchestrator | Thursday 16 April 2026 08:04:47 +0000 (0:00:01.129) 0:18:54.637 ******** 2026-04-16 08:05:02.536815 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536826 | orchestrator | 2026-04-16 08:05:02.536837 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:05:02.536848 | orchestrator | Thursday 16 April 2026 08:04:49 +0000 (0:00:01.197) 0:18:55.835 ******** 2026-04-16 08:05:02.536859 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536870 | orchestrator | 2026-04-16 08:05:02.536880 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:05:02.536892 | orchestrator | Thursday 16 April 2026 08:04:50 +0000 (0:00:01.104) 0:18:56.939 ******** 2026-04-16 08:05:02.536902 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536913 | orchestrator | 2026-04-16 08:05:02.536924 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:05:02.536935 | orchestrator | Thursday 16 April 2026 08:04:51 +0000 (0:00:01.123) 0:18:58.063 ******** 2026-04-16 08:05:02.536946 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.536957 | orchestrator | 2026-04-16 08:05:02.536968 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:05:02.536983 | orchestrator | Thursday 16 April 2026 08:04:52 +0000 (0:00:01.105) 0:18:59.168 ******** 2026-04-16 08:05:02.537002 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537020 | orchestrator | 2026-04-16 08:05:02.537039 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 
2026-04-16 08:05:02.537057 | orchestrator | Thursday 16 April 2026 08:04:53 +0000 (0:00:01.102) 0:19:00.271 ******** 2026-04-16 08:05:02.537075 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537093 | orchestrator | 2026-04-16 08:05:02.537110 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:05:02.537128 | orchestrator | Thursday 16 April 2026 08:04:54 +0000 (0:00:01.095) 0:19:01.366 ******** 2026-04-16 08:05:02.537146 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537165 | orchestrator | 2026-04-16 08:05:02.537183 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:05:02.537202 | orchestrator | Thursday 16 April 2026 08:04:55 +0000 (0:00:01.120) 0:19:02.487 ******** 2026-04-16 08:05:02.537221 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537234 | orchestrator | 2026-04-16 08:05:02.537245 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:05:02.537256 | orchestrator | Thursday 16 April 2026 08:04:56 +0000 (0:00:01.116) 0:19:03.603 ******** 2026-04-16 08:05:02.537267 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537278 | orchestrator | 2026-04-16 08:05:02.537288 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:05:02.537299 | orchestrator | Thursday 16 April 2026 08:04:57 +0000 (0:00:01.038) 0:19:04.642 ******** 2026-04-16 08:05:02.537310 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537321 | orchestrator | 2026-04-16 08:05:02.537332 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:05:02.537342 | orchestrator | Thursday 16 April 2026 08:04:58 +0000 (0:00:00.889) 0:19:05.532 ******** 2026-04-16 08:05:02.537353 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537364 
| orchestrator | 2026-04-16 08:05:02.537375 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:05:02.537396 | orchestrator | Thursday 16 April 2026 08:04:59 +0000 (0:00:00.913) 0:19:06.446 ******** 2026-04-16 08:05:02.537407 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537417 | orchestrator | 2026-04-16 08:05:02.537428 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:05:02.537439 | orchestrator | Thursday 16 April 2026 08:05:00 +0000 (0:00:00.912) 0:19:07.358 ******** 2026-04-16 08:05:02.537450 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537461 | orchestrator | 2026-04-16 08:05:02.537472 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:05:02.537483 | orchestrator | Thursday 16 April 2026 08:05:01 +0000 (0:00:00.910) 0:19:08.269 ******** 2026-04-16 08:05:02.537494 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537504 | orchestrator | 2026-04-16 08:05:02.537523 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:05:02.537534 | orchestrator | Thursday 16 April 2026 08:05:02 +0000 (0:00:00.882) 0:19:09.151 ******** 2026-04-16 08:05:02.537545 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:02.537556 | orchestrator | 2026-04-16 08:05:02.537577 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:05:44.789673 | orchestrator | Thursday 16 April 2026 08:05:03 +0000 (0:00:01.110) 0:19:10.261 ******** 2026-04-16 08:05:44.789797 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.789847 | orchestrator | 2026-04-16 08:05:44.789862 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:05:44.789874 | orchestrator | Thursday 16 April 2026 
08:05:04 +0000 (0:00:01.079) 0:19:11.341 ******** 2026-04-16 08:05:44.789886 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.789897 | orchestrator | 2026-04-16 08:05:44.789908 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:05:44.789920 | orchestrator | Thursday 16 April 2026 08:05:05 +0000 (0:00:01.101) 0:19:12.443 ******** 2026-04-16 08:05:44.789931 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.789942 | orchestrator | 2026-04-16 08:05:44.789953 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:05:44.789965 | orchestrator | Thursday 16 April 2026 08:05:06 +0000 (0:00:01.106) 0:19:13.549 ******** 2026-04-16 08:05:44.789976 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.789987 | orchestrator | 2026-04-16 08:05:44.789998 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:05:44.790009 | orchestrator | Thursday 16 April 2026 08:05:07 +0000 (0:00:01.108) 0:19:14.658 ******** 2026-04-16 08:05:44.790082 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790094 | orchestrator | 2026-04-16 08:05:44.790105 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:05:44.790116 | orchestrator | Thursday 16 April 2026 08:05:09 +0000 (0:00:01.102) 0:19:15.761 ******** 2026-04-16 08:05:44.790127 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790138 | orchestrator | 2026-04-16 08:05:44.790149 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:05:44.790160 | orchestrator | Thursday 16 April 2026 08:05:10 +0000 (0:00:01.100) 0:19:16.861 ******** 2026-04-16 08:05:44.790171 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790184 | orchestrator | 2026-04-16 08:05:44.790197 | orchestrator | TASK 
[ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:05:44.790209 | orchestrator | Thursday 16 April 2026 08:05:11 +0000 (0:00:01.155) 0:19:18.016 ******** 2026-04-16 08:05:44.790222 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790235 | orchestrator | 2026-04-16 08:05:44.790247 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:05:44.790260 | orchestrator | Thursday 16 April 2026 08:05:12 +0000 (0:00:01.098) 0:19:19.115 ******** 2026-04-16 08:05:44.790272 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790286 | orchestrator | 2026-04-16 08:05:44.790300 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:05:44.790339 | orchestrator | Thursday 16 April 2026 08:05:13 +0000 (0:00:01.113) 0:19:20.228 ******** 2026-04-16 08:05:44.790352 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790369 | orchestrator | 2026-04-16 08:05:44.790389 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:05:44.790412 | orchestrator | Thursday 16 April 2026 08:05:14 +0000 (0:00:01.110) 0:19:21.339 ******** 2026-04-16 08:05:44.790432 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790452 | orchestrator | 2026-04-16 08:05:44.790471 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:05:44.790490 | orchestrator | Thursday 16 April 2026 08:05:15 +0000 (0:00:01.119) 0:19:22.459 ******** 2026-04-16 08:05:44.790510 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790530 | orchestrator | 2026-04-16 08:05:44.790549 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:05:44.790569 | orchestrator | Thursday 16 April 
2026 08:05:16 +0000 (0:00:01.119) 0:19:23.578 ******** 2026-04-16 08:05:44.790590 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790610 | orchestrator | 2026-04-16 08:05:44.790629 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:05:44.790650 | orchestrator | Thursday 16 April 2026 08:05:17 +0000 (0:00:01.107) 0:19:24.685 ******** 2026-04-16 08:05:44.790671 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790690 | orchestrator | 2026-04-16 08:05:44.790709 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:05:44.790728 | orchestrator | Thursday 16 April 2026 08:05:19 +0000 (0:00:01.139) 0:19:25.825 ******** 2026-04-16 08:05:44.790746 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790765 | orchestrator | 2026-04-16 08:05:44.790786 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:05:44.790829 | orchestrator | Thursday 16 April 2026 08:05:20 +0000 (0:00:01.096) 0:19:26.922 ******** 2026-04-16 08:05:44.790849 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790867 | orchestrator | 2026-04-16 08:05:44.790886 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:05:44.790905 | orchestrator | Thursday 16 April 2026 08:05:21 +0000 (0:00:01.156) 0:19:28.078 ******** 2026-04-16 08:05:44.790924 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.790944 | orchestrator | 2026-04-16 08:05:44.790963 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:05:44.790983 | orchestrator | Thursday 16 April 2026 08:05:22 +0000 (0:00:01.212) 0:19:29.291 ******** 2026-04-16 08:05:44.791003 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791021 | orchestrator | 2026-04-16 08:05:44.791041 
| orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:05:44.791060 | orchestrator | Thursday 16 April 2026 08:05:23 +0000 (0:00:01.138) 0:19:30.430 ******** 2026-04-16 08:05:44.791080 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791098 | orchestrator | 2026-04-16 08:05:44.791135 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:05:44.791154 | orchestrator | Thursday 16 April 2026 08:05:24 +0000 (0:00:01.239) 0:19:31.669 ******** 2026-04-16 08:05:44.791171 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791188 | orchestrator | 2026-04-16 08:05:44.791236 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:05:44.791256 | orchestrator | Thursday 16 April 2026 08:05:26 +0000 (0:00:01.104) 0:19:32.774 ******** 2026-04-16 08:05:44.791273 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791291 | orchestrator | 2026-04-16 08:05:44.791308 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:05:44.791328 | orchestrator | Thursday 16 April 2026 08:05:27 +0000 (0:00:01.125) 0:19:33.899 ******** 2026-04-16 08:05:44.791367 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791386 | orchestrator | 2026-04-16 08:05:44.791404 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:05:44.791421 | orchestrator | Thursday 16 April 2026 08:05:28 +0000 (0:00:01.109) 0:19:35.008 ******** 2026-04-16 08:05:44.791439 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791457 | orchestrator | 2026-04-16 08:05:44.791474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:05:44.791492 | orchestrator | Thursday 16 
April 2026 08:05:29 +0000 (0:00:01.120) 0:19:36.129 ******** 2026-04-16 08:05:44.791510 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791528 | orchestrator | 2026-04-16 08:05:44.791546 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:05:44.791564 | orchestrator | Thursday 16 April 2026 08:05:30 +0000 (0:00:01.092) 0:19:37.221 ******** 2026-04-16 08:05:44.791582 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791599 | orchestrator | 2026-04-16 08:05:44.791617 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:05:44.791635 | orchestrator | Thursday 16 April 2026 08:05:31 +0000 (0:00:01.134) 0:19:38.356 ******** 2026-04-16 08:05:44.791654 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-16 08:05:44.791673 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-16 08:05:44.791691 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-16 08:05:44.791709 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791728 | orchestrator | 2026-04-16 08:05:44.791746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:05:44.791765 | orchestrator | Thursday 16 April 2026 08:05:33 +0000 (0:00:01.716) 0:19:40.072 ******** 2026-04-16 08:05:44.791783 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-16 08:05:44.791802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-16 08:05:44.791878 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-16 08:05:44.791898 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.791915 | orchestrator | 2026-04-16 08:05:44.791932 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:05:44.791950 | orchestrator | 
Thursday 16 April 2026 08:05:34 +0000 (0:00:01.669) 0:19:41.742 ******** 2026-04-16 08:05:44.791968 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-04-16 08:05:44.791985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-16 08:05:44.792003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-04-16 08:05:44.792020 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792037 | orchestrator | 2026-04-16 08:05:44.792056 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:05:44.792073 | orchestrator | Thursday 16 April 2026 08:05:36 +0000 (0:00:01.442) 0:19:43.184 ******** 2026-04-16 08:05:44.792091 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792108 | orchestrator | 2026-04-16 08:05:44.792127 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:05:44.792145 | orchestrator | Thursday 16 April 2026 08:05:37 +0000 (0:00:01.116) 0:19:44.301 ******** 2026-04-16 08:05:44.792165 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-04-16 08:05:44.792184 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792202 | orchestrator | 2026-04-16 08:05:44.792220 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:05:44.792240 | orchestrator | Thursday 16 April 2026 08:05:38 +0000 (0:00:01.233) 0:19:45.535 ******** 2026-04-16 08:05:44.792258 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792276 | orchestrator | 2026-04-16 08:05:44.792295 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-16 08:05:44.792313 | orchestrator | Thursday 16 April 2026 08:05:39 +0000 (0:00:01.093) 0:19:46.629 ******** 2026-04-16 08:05:44.792348 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 08:05:44.792366 
| orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 08:05:44.792386 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 08:05:44.792405 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792423 | orchestrator | 2026-04-16 08:05:44.792442 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-16 08:05:44.792461 | orchestrator | Thursday 16 April 2026 08:05:41 +0000 (0:00:01.400) 0:19:48.029 ******** 2026-04-16 08:05:44.792478 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792495 | orchestrator | 2026-04-16 08:05:44.792514 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-16 08:05:44.792531 | orchestrator | Thursday 16 April 2026 08:05:42 +0000 (0:00:01.119) 0:19:49.149 ******** 2026-04-16 08:05:44.792549 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792567 | orchestrator | 2026-04-16 08:05:44.792585 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-04-16 08:05:44.792603 | orchestrator | Thursday 16 April 2026 08:05:43 +0000 (0:00:01.136) 0:19:50.285 ******** 2026-04-16 08:05:44.792622 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792640 | orchestrator | 2026-04-16 08:05:44.792673 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-16 08:05:44.792692 | orchestrator | Thursday 16 April 2026 08:05:44 +0000 (0:00:01.109) 0:19:51.395 ******** 2026-04-16 08:05:44.792711 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:05:44.792728 | orchestrator | 2026-04-16 08:05:44.792768 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-16 08:06:16.746538 | orchestrator | 2026-04-16 08:06:16.746663 | orchestrator | TASK [Stop ceph mgr] 
*********************************************************** 2026-04-16 08:06:16.746681 | orchestrator | Thursday 16 April 2026 08:05:45 +0000 (0:00:00.976) 0:19:52.371 ******** 2026-04-16 08:06:16.746693 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746705 | orchestrator | 2026-04-16 08:06:16.746717 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:06:16.746728 | orchestrator | Thursday 16 April 2026 08:05:46 +0000 (0:00:00.790) 0:19:53.162 ******** 2026-04-16 08:06:16.746739 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746750 | orchestrator | 2026-04-16 08:06:16.746761 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:06:16.746772 | orchestrator | Thursday 16 April 2026 08:05:47 +0000 (0:00:00.758) 0:19:53.920 ******** 2026-04-16 08:06:16.746783 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746793 | orchestrator | 2026-04-16 08:06:16.746804 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:06:16.746843 | orchestrator | Thursday 16 April 2026 08:05:47 +0000 (0:00:00.772) 0:19:54.693 ******** 2026-04-16 08:06:16.746855 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746866 | orchestrator | 2026-04-16 08:06:16.746877 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:06:16.746888 | orchestrator | Thursday 16 April 2026 08:05:48 +0000 (0:00:00.799) 0:19:55.493 ******** 2026-04-16 08:06:16.746899 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746910 | orchestrator | 2026-04-16 08:06:16.746920 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:06:16.746931 | orchestrator | Thursday 16 April 2026 08:05:49 +0000 (0:00:00.763) 0:19:56.256 ******** 2026-04-16 08:06:16.746943 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746954 | orchestrator | 2026-04-16 08:06:16.746965 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:06:16.746976 | orchestrator | Thursday 16 April 2026 08:05:50 +0000 (0:00:00.770) 0:19:57.026 ******** 2026-04-16 08:06:16.746987 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.746998 | orchestrator | 2026-04-16 08:06:16.747009 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:06:16.747045 | orchestrator | Thursday 16 April 2026 08:05:51 +0000 (0:00:00.787) 0:19:57.814 ******** 2026-04-16 08:06:16.747058 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747071 | orchestrator | 2026-04-16 08:06:16.747084 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:06:16.747096 | orchestrator | Thursday 16 April 2026 08:05:51 +0000 (0:00:00.782) 0:19:58.597 ******** 2026-04-16 08:06:16.747109 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747121 | orchestrator | 2026-04-16 08:06:16.747132 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:06:16.747143 | orchestrator | Thursday 16 April 2026 08:05:52 +0000 (0:00:00.766) 0:19:59.363 ******** 2026-04-16 08:06:16.747154 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747164 | orchestrator | 2026-04-16 08:06:16.747175 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:06:16.747186 | orchestrator | Thursday 16 April 2026 08:05:53 +0000 (0:00:00.775) 0:20:00.139 ******** 2026-04-16 08:06:16.747197 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747207 | orchestrator | 2026-04-16 08:06:16.747218 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 
2026-04-16 08:06:16.747229 | orchestrator | Thursday 16 April 2026 08:05:54 +0000 (0:00:00.766) 0:20:00.906 ******** 2026-04-16 08:06:16.747240 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747250 | orchestrator | 2026-04-16 08:06:16.747261 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:06:16.747272 | orchestrator | Thursday 16 April 2026 08:05:54 +0000 (0:00:00.823) 0:20:01.729 ******** 2026-04-16 08:06:16.747283 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747293 | orchestrator | 2026-04-16 08:06:16.747304 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 08:06:16.747315 | orchestrator | Thursday 16 April 2026 08:05:55 +0000 (0:00:00.776) 0:20:02.506 ******** 2026-04-16 08:06:16.747326 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747336 | orchestrator | 2026-04-16 08:06:16.747347 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:06:16.747358 | orchestrator | Thursday 16 April 2026 08:05:56 +0000 (0:00:00.767) 0:20:03.273 ******** 2026-04-16 08:06:16.747369 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747379 | orchestrator | 2026-04-16 08:06:16.747390 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:06:16.747401 | orchestrator | Thursday 16 April 2026 08:05:57 +0000 (0:00:00.778) 0:20:04.052 ******** 2026-04-16 08:06:16.747412 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747422 | orchestrator | 2026-04-16 08:06:16.747433 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:06:16.747444 | orchestrator | Thursday 16 April 2026 08:05:58 +0000 (0:00:00.789) 0:20:04.841 ******** 2026-04-16 08:06:16.747455 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747465 
| orchestrator | 2026-04-16 08:06:16.747476 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:06:16.747487 | orchestrator | Thursday 16 April 2026 08:05:58 +0000 (0:00:00.734) 0:20:05.575 ******** 2026-04-16 08:06:16.747498 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747509 | orchestrator | 2026-04-16 08:06:16.747519 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:06:16.747530 | orchestrator | Thursday 16 April 2026 08:05:59 +0000 (0:00:00.765) 0:20:06.341 ******** 2026-04-16 08:06:16.747542 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747560 | orchestrator | 2026-04-16 08:06:16.747598 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:06:16.747619 | orchestrator | Thursday 16 April 2026 08:06:00 +0000 (0:00:00.776) 0:20:07.117 ******** 2026-04-16 08:06:16.747639 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747660 | orchestrator | 2026-04-16 08:06:16.747695 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:06:16.747717 | orchestrator | Thursday 16 April 2026 08:06:01 +0000 (0:00:00.772) 0:20:07.890 ******** 2026-04-16 08:06:16.747728 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747739 | orchestrator | 2026-04-16 08:06:16.747750 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:06:16.747761 | orchestrator | Thursday 16 April 2026 08:06:01 +0000 (0:00:00.770) 0:20:08.661 ******** 2026-04-16 08:06:16.747771 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747782 | orchestrator | 2026-04-16 08:06:16.747793 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:06:16.747804 | orchestrator | Thursday 16 
April 2026 08:06:02 +0000 (0:00:00.777) 0:20:09.438 ******** 2026-04-16 08:06:16.747840 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747853 | orchestrator | 2026-04-16 08:06:16.747863 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:06:16.747874 | orchestrator | Thursday 16 April 2026 08:06:03 +0000 (0:00:00.805) 0:20:10.244 ******** 2026-04-16 08:06:16.747885 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747896 | orchestrator | 2026-04-16 08:06:16.747906 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:06:16.747917 | orchestrator | Thursday 16 April 2026 08:06:04 +0000 (0:00:00.828) 0:20:11.073 ******** 2026-04-16 08:06:16.747928 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747939 | orchestrator | 2026-04-16 08:06:16.747950 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:06:16.747960 | orchestrator | Thursday 16 April 2026 08:06:05 +0000 (0:00:00.777) 0:20:11.850 ******** 2026-04-16 08:06:16.747971 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.747982 | orchestrator | 2026-04-16 08:06:16.747993 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:06:16.748004 | orchestrator | Thursday 16 April 2026 08:06:05 +0000 (0:00:00.812) 0:20:12.663 ******** 2026-04-16 08:06:16.748014 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748025 | orchestrator | 2026-04-16 08:06:16.748036 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:06:16.748047 | orchestrator | Thursday 16 April 2026 08:06:06 +0000 (0:00:00.788) 0:20:13.452 ******** 2026-04-16 08:06:16.748058 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748069 | orchestrator | 2026-04-16 08:06:16.748079 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:06:16.748090 | orchestrator | Thursday 16 April 2026 08:06:07 +0000 (0:00:00.782) 0:20:14.235 ******** 2026-04-16 08:06:16.748101 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748111 | orchestrator | 2026-04-16 08:06:16.748122 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:06:16.748133 | orchestrator | Thursday 16 April 2026 08:06:08 +0000 (0:00:00.798) 0:20:15.033 ******** 2026-04-16 08:06:16.748144 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748155 | orchestrator | 2026-04-16 08:06:16.748166 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:06:16.748176 | orchestrator | Thursday 16 April 2026 08:06:09 +0000 (0:00:00.745) 0:20:15.779 ******** 2026-04-16 08:06:16.748187 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748198 | orchestrator | 2026-04-16 08:06:16.748209 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:06:16.748219 | orchestrator | Thursday 16 April 2026 08:06:09 +0000 (0:00:00.800) 0:20:16.580 ******** 2026-04-16 08:06:16.748230 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748241 | orchestrator | 2026-04-16 08:06:16.748252 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:06:16.748263 | orchestrator | Thursday 16 April 2026 08:06:10 +0000 (0:00:00.773) 0:20:17.353 ******** 2026-04-16 08:06:16.748273 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748284 | orchestrator | 2026-04-16 08:06:16.748302 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:06:16.748313 | orchestrator | Thursday 16 April 2026 08:06:11 +0000 (0:00:00.754) 0:20:18.108 ******** 
2026-04-16 08:06:16.748324 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748335 | orchestrator | 2026-04-16 08:06:16.748346 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:06:16.748357 | orchestrator | Thursday 16 April 2026 08:06:12 +0000 (0:00:00.766) 0:20:18.874 ******** 2026-04-16 08:06:16.748367 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748378 | orchestrator | 2026-04-16 08:06:16.748389 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:06:16.748400 | orchestrator | Thursday 16 April 2026 08:06:12 +0000 (0:00:00.767) 0:20:19.642 ******** 2026-04-16 08:06:16.748411 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748421 | orchestrator | 2026-04-16 08:06:16.748432 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:06:16.748443 | orchestrator | Thursday 16 April 2026 08:06:13 +0000 (0:00:00.766) 0:20:20.408 ******** 2026-04-16 08:06:16.748454 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748465 | orchestrator | 2026-04-16 08:06:16.748475 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:06:16.748486 | orchestrator | Thursday 16 April 2026 08:06:14 +0000 (0:00:00.801) 0:20:21.210 ******** 2026-04-16 08:06:16.748497 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748508 | orchestrator | 2026-04-16 08:06:16.748519 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:06:16.748529 | orchestrator | Thursday 16 April 2026 08:06:15 +0000 (0:00:00.749) 0:20:21.959 ******** 2026-04-16 08:06:16.748540 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748551 | orchestrator | 2026-04-16 08:06:16.748567 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch 
--report' to see how many osds are to be created] *** 2026-04-16 08:06:16.748580 | orchestrator | Thursday 16 April 2026 08:06:15 +0000 (0:00:00.768) 0:20:22.728 ******** 2026-04-16 08:06:16.748591 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:16.748602 | orchestrator | 2026-04-16 08:06:16.748613 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:06:16.748631 | orchestrator | Thursday 16 April 2026 08:06:16 +0000 (0:00:00.766) 0:20:23.494 ******** 2026-04-16 08:06:46.085656 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.085796 | orchestrator | 2026-04-16 08:06:46.085859 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:06:46.085883 | orchestrator | Thursday 16 April 2026 08:06:17 +0000 (0:00:00.782) 0:20:24.277 ******** 2026-04-16 08:06:46.085902 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.085922 | orchestrator | 2026-04-16 08:06:46.085943 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:06:46.085962 | orchestrator | Thursday 16 April 2026 08:06:18 +0000 (0:00:00.801) 0:20:25.079 ******** 2026-04-16 08:06:46.085980 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.085998 | orchestrator | 2026-04-16 08:06:46.086104 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:06:46.086134 | orchestrator | Thursday 16 April 2026 08:06:19 +0000 (0:00:00.771) 0:20:25.850 ******** 2026-04-16 08:06:46.086151 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086169 | orchestrator | 2026-04-16 08:06:46.086187 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:06:46.086206 | orchestrator | Thursday 16 April 2026 08:06:19 +0000 
(0:00:00.758) 0:20:26.609 ******** 2026-04-16 08:06:46.086224 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086241 | orchestrator | 2026-04-16 08:06:46.086259 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:06:46.086278 | orchestrator | Thursday 16 April 2026 08:06:20 +0000 (0:00:00.753) 0:20:27.362 ******** 2026-04-16 08:06:46.086328 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086349 | orchestrator | 2026-04-16 08:06:46.086368 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:06:46.086386 | orchestrator | Thursday 16 April 2026 08:06:21 +0000 (0:00:00.875) 0:20:28.238 ******** 2026-04-16 08:06:46.086405 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086423 | orchestrator | 2026-04-16 08:06:46.086441 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:06:46.086459 | orchestrator | Thursday 16 April 2026 08:06:22 +0000 (0:00:00.751) 0:20:28.990 ******** 2026-04-16 08:06:46.086481 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086501 | orchestrator | 2026-04-16 08:06:46.086520 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:06:46.086539 | orchestrator | Thursday 16 April 2026 08:06:23 +0000 (0:00:00.844) 0:20:29.834 ******** 2026-04-16 08:06:46.086558 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086578 | orchestrator | 2026-04-16 08:06:46.086597 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:06:46.086614 | orchestrator | Thursday 16 April 2026 08:06:23 +0000 (0:00:00.744) 0:20:30.579 ******** 2026-04-16 08:06:46.086633 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086651 | orchestrator | 2026-04-16 08:06:46.086670 | orchestrator | TASK [ceph-facts : 
Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:06:46.086691 | orchestrator | Thursday 16 April 2026 08:06:24 +0000 (0:00:00.776) 0:20:31.356 ******** 2026-04-16 08:06:46.086709 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086728 | orchestrator | 2026-04-16 08:06:46.086740 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:06:46.086751 | orchestrator | Thursday 16 April 2026 08:06:25 +0000 (0:00:00.757) 0:20:32.113 ******** 2026-04-16 08:06:46.086762 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086773 | orchestrator | 2026-04-16 08:06:46.086784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:06:46.086795 | orchestrator | Thursday 16 April 2026 08:06:26 +0000 (0:00:00.743) 0:20:32.856 ******** 2026-04-16 08:06:46.086806 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086816 | orchestrator | 2026-04-16 08:06:46.086860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:06:46.086876 | orchestrator | Thursday 16 April 2026 08:06:26 +0000 (0:00:00.746) 0:20:33.603 ******** 2026-04-16 08:06:46.086888 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.086898 | orchestrator | 2026-04-16 08:06:46.086909 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:06:46.086920 | orchestrator | Thursday 16 April 2026 08:06:27 +0000 (0:00:00.758) 0:20:34.361 ******** 2026-04-16 08:06:46.086931 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-16 08:06:46.086942 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-16 08:06:46.086953 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-16 08:06:46.086964 | orchestrator | 
skipping: [testbed-node-1] 2026-04-16 08:06:46.086975 | orchestrator | 2026-04-16 08:06:46.086985 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:06:46.086996 | orchestrator | Thursday 16 April 2026 08:06:28 +0000 (0:00:01.125) 0:20:35.486 ******** 2026-04-16 08:06:46.087007 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-16 08:06:46.087018 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-16 08:06:46.087028 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-16 08:06:46.087039 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087049 | orchestrator | 2026-04-16 08:06:46.087060 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:06:46.087071 | orchestrator | Thursday 16 April 2026 08:06:29 +0000 (0:00:01.002) 0:20:36.489 ******** 2026-04-16 08:06:46.087097 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-04-16 08:06:46.087119 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-04-16 08:06:46.087130 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-04-16 08:06:46.087141 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087152 | orchestrator | 2026-04-16 08:06:46.087162 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:06:46.087173 | orchestrator | Thursday 16 April 2026 08:06:30 +0000 (0:00:00.996) 0:20:37.486 ******** 2026-04-16 08:06:46.087210 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087221 | orchestrator | 2026-04-16 08:06:46.087232 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:06:46.087243 | orchestrator | Thursday 16 April 2026 08:06:31 +0000 (0:00:00.780) 0:20:38.267 ******** 2026-04-16 08:06:46.087255 | 
orchestrator | skipping: [testbed-node-1] => (item=0)  2026-04-16 08:06:46.087266 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087276 | orchestrator | 2026-04-16 08:06:46.087287 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:06:46.087298 | orchestrator | Thursday 16 April 2026 08:06:32 +0000 (0:00:00.888) 0:20:39.156 ******** 2026-04-16 08:06:46.087309 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087319 | orchestrator | 2026-04-16 08:06:46.087330 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-04-16 08:06:46.087341 | orchestrator | Thursday 16 April 2026 08:06:33 +0000 (0:00:00.862) 0:20:40.018 ******** 2026-04-16 08:06:46.087352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 08:06:46.087362 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 08:06:46.087373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 08:06:46.087384 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087394 | orchestrator | 2026-04-16 08:06:46.087405 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-04-16 08:06:46.087416 | orchestrator | Thursday 16 April 2026 08:06:34 +0000 (0:00:01.040) 0:20:41.059 ******** 2026-04-16 08:06:46.087426 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087437 | orchestrator | 2026-04-16 08:06:46.087448 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-04-16 08:06:46.087459 | orchestrator | Thursday 16 April 2026 08:06:35 +0000 (0:00:00.776) 0:20:41.835 ******** 2026-04-16 08:06:46.087469 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087480 | orchestrator | 2026-04-16 08:06:46.087490 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] 
**************************************** 2026-04-16 08:06:46.087501 | orchestrator | Thursday 16 April 2026 08:06:35 +0000 (0:00:00.790) 0:20:42.626 ******** 2026-04-16 08:06:46.087512 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087523 | orchestrator | 2026-04-16 08:06:46.087533 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-04-16 08:06:46.087544 | orchestrator | Thursday 16 April 2026 08:06:36 +0000 (0:00:00.775) 0:20:43.402 ******** 2026-04-16 08:06:46.087554 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:06:46.087565 | orchestrator | 2026-04-16 08:06:46.087576 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-04-16 08:06:46.087587 | orchestrator | 2026-04-16 08:06:46.087598 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-04-16 08:06:46.087609 | orchestrator | Thursday 16 April 2026 08:06:37 +0000 (0:00:00.960) 0:20:44.362 ******** 2026-04-16 08:06:46.087619 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:06:46.087630 | orchestrator | 2026-04-16 08:06:46.087641 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:06:46.087652 | orchestrator | Thursday 16 April 2026 08:06:38 +0000 (0:00:00.786) 0:20:45.149 ******** 2026-04-16 08:06:46.087671 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:06:46.087688 | orchestrator | 2026-04-16 08:06:46.087717 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:06:46.087763 | orchestrator | Thursday 16 April 2026 08:06:39 +0000 (0:00:00.760) 0:20:45.909 ******** 2026-04-16 08:06:46.087782 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:06:46.087798 | orchestrator | 2026-04-16 08:06:46.087815 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 
2026-04-16 08:06:46.087910 | orchestrator | Thursday 16 April 2026 08:06:39 +0000 (0:00:00.751) 0:20:46.661 ********
2026-04-16 08:06:46.087930 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.087941 | orchestrator |
2026-04-16 08:06:46.087952 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:06:46.087963 | orchestrator | Thursday 16 April 2026 08:06:40 +0000 (0:00:00.808) 0:20:47.470 ********
2026-04-16 08:06:46.087974 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.087984 | orchestrator |
2026-04-16 08:06:46.087995 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:06:46.088006 | orchestrator | Thursday 16 April 2026 08:06:41 +0000 (0:00:00.749) 0:20:48.219 ********
2026-04-16 08:06:46.088017 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088027 | orchestrator |
2026-04-16 08:06:46.088038 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:06:46.088049 | orchestrator | Thursday 16 April 2026 08:06:42 +0000 (0:00:00.765) 0:20:48.985 ********
2026-04-16 08:06:46.088059 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088070 | orchestrator |
2026-04-16 08:06:46.088081 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:06:46.088091 | orchestrator | Thursday 16 April 2026 08:06:42 +0000 (0:00:00.766) 0:20:49.751 ********
2026-04-16 08:06:46.088102 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088113 | orchestrator |
2026-04-16 08:06:46.088124 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:06:46.088135 | orchestrator | Thursday 16 April 2026 08:06:43 +0000 (0:00:00.784) 0:20:50.535 ********
2026-04-16 08:06:46.088145 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088156 | orchestrator |
2026-04-16 08:06:46.088167 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:06:46.088177 | orchestrator | Thursday 16 April 2026 08:06:44 +0000 (0:00:00.772) 0:20:51.308 ********
2026-04-16 08:06:46.088188 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088199 | orchestrator |
2026-04-16 08:06:46.088209 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:06:46.088259 | orchestrator | Thursday 16 April 2026 08:06:45 +0000 (0:00:00.767) 0:20:52.076 ********
2026-04-16 08:06:46.088271 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:06:46.088282 | orchestrator |
2026-04-16 08:06:46.088293 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:06:46.088317 | orchestrator | Thursday 16 April 2026 08:06:46 +0000 (0:00:00.755) 0:20:52.831 ********
2026-04-16 08:07:17.148086 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148229 | orchestrator |
2026-04-16 08:07:17.148253 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:07:17.148274 | orchestrator | Thursday 16 April 2026 08:06:46 +0000 (0:00:00.761) 0:20:53.593 ********
2026-04-16 08:07:17.148292 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148310 | orchestrator |
2026-04-16 08:07:17.148329 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:07:17.148348 | orchestrator | Thursday 16 April 2026 08:06:47 +0000 (0:00:00.784) 0:20:54.378 ********
2026-04-16 08:07:17.148366 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148385 | orchestrator |
2026-04-16 08:07:17.148403 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:07:17.148422 | orchestrator | Thursday 16 April 2026 08:06:48 +0000 (0:00:00.803) 0:20:55.181 ********
2026-04-16 08:07:17.148441 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148458 | orchestrator |
2026-04-16 08:07:17.148507 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:07:17.148526 | orchestrator | Thursday 16 April 2026 08:06:49 +0000 (0:00:00.773) 0:20:55.954 ********
2026-04-16 08:07:17.148545 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148563 | orchestrator |
2026-04-16 08:07:17.148581 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:07:17.148601 | orchestrator | Thursday 16 April 2026 08:06:49 +0000 (0:00:00.757) 0:20:56.712 ********
2026-04-16 08:07:17.148620 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148639 | orchestrator |
2026-04-16 08:07:17.148659 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:07:17.148720 | orchestrator | Thursday 16 April 2026 08:06:50 +0000 (0:00:00.759) 0:20:57.472 ********
2026-04-16 08:07:17.148741 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148761 | orchestrator |
2026-04-16 08:07:17.148782 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:07:17.148803 | orchestrator | Thursday 16 April 2026 08:06:51 +0000 (0:00:00.842) 0:20:58.315 ********
2026-04-16 08:07:17.148823 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148870 | orchestrator |
2026-04-16 08:07:17.148890 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:07:17.148911 | orchestrator | Thursday 16 April 2026 08:06:52 +0000 (0:00:00.760) 0:20:59.076 ********
2026-04-16 08:07:17.148930 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.148947 | orchestrator |
2026-04-16 08:07:17.148964 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:07:17.148982 | orchestrator | Thursday 16 April 2026 08:06:53 +0000 (0:00:00.782) 0:20:59.858 ********
2026-04-16 08:07:17.149000 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149018 | orchestrator |
2026-04-16 08:07:17.149037 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:07:17.149056 | orchestrator | Thursday 16 April 2026 08:06:53 +0000 (0:00:00.758) 0:21:00.617 ********
2026-04-16 08:07:17.149076 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149095 | orchestrator |
2026-04-16 08:07:17.149114 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:07:17.149129 | orchestrator | Thursday 16 April 2026 08:06:54 +0000 (0:00:00.762) 0:21:01.380 ********
2026-04-16 08:07:17.149140 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149151 | orchestrator |
2026-04-16 08:07:17.149161 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:07:17.149172 | orchestrator | Thursday 16 April 2026 08:06:55 +0000 (0:00:00.757) 0:21:02.137 ********
2026-04-16 08:07:17.149183 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149194 | orchestrator |
2026-04-16 08:07:17.149204 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:07:17.149215 | orchestrator | Thursday 16 April 2026 08:06:56 +0000 (0:00:00.740) 0:21:02.878 ********
2026-04-16 08:07:17.149226 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149236 | orchestrator |
2026-04-16 08:07:17.149247 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:07:17.149258 | orchestrator | Thursday 16 April 2026 08:06:56 +0000 (0:00:00.779) 0:21:03.657 ********
2026-04-16 08:07:17.149269 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149279 | orchestrator |
2026-04-16 08:07:17.149290 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:07:17.149301 | orchestrator | Thursday 16 April 2026 08:06:57 +0000 (0:00:00.791) 0:21:04.449 ********
2026-04-16 08:07:17.149311 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149322 | orchestrator |
2026-04-16 08:07:17.149332 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 08:07:17.149343 | orchestrator | Thursday 16 April 2026 08:06:58 +0000 (0:00:00.762) 0:21:05.212 ********
2026-04-16 08:07:17.149354 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149379 | orchestrator |
2026-04-16 08:07:17.149389 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 08:07:17.149413 | orchestrator | Thursday 16 April 2026 08:06:59 +0000 (0:00:00.767) 0:21:05.979 ********
2026-04-16 08:07:17.149424 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149435 | orchestrator |
2026-04-16 08:07:17.149446 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 08:07:17.149457 | orchestrator | Thursday 16 April 2026 08:06:59 +0000 (0:00:00.747) 0:21:06.727 ********
2026-04-16 08:07:17.149467 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149478 | orchestrator |
2026-04-16 08:07:17.149505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 08:07:17.149516 | orchestrator | Thursday 16 April 2026 08:07:00 +0000 (0:00:00.779) 0:21:07.506 ********
2026-04-16 08:07:17.149527 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149538 | orchestrator |
2026-04-16 08:07:17.149548 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 08:07:17.149559 | orchestrator | Thursday 16 April 2026 08:07:01 +0000 (0:00:00.766) 0:21:08.273 ********
2026-04-16 08:07:17.149570 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149581 | orchestrator |
2026-04-16 08:07:17.149617 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 08:07:17.149630 | orchestrator | Thursday 16 April 2026 08:07:02 +0000 (0:00:00.766) 0:21:09.039 ********
2026-04-16 08:07:17.149641 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149651 | orchestrator |
2026-04-16 08:07:17.149662 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-16 08:07:17.149673 | orchestrator | Thursday 16 April 2026 08:07:03 +0000 (0:00:00.785) 0:21:09.825 ********
2026-04-16 08:07:17.149684 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149695 | orchestrator |
2026-04-16 08:07:17.149705 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 08:07:17.149716 | orchestrator | Thursday 16 April 2026 08:07:03 +0000 (0:00:00.758) 0:21:10.583 ********
2026-04-16 08:07:17.149727 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149738 | orchestrator |
2026-04-16 08:07:17.149748 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:07:17.149759 | orchestrator | Thursday 16 April 2026 08:07:04 +0000 (0:00:00.763) 0:21:11.347 ********
2026-04-16 08:07:17.149769 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149780 | orchestrator |
2026-04-16 08:07:17.149791 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:07:17.149802 | orchestrator | Thursday 16 April 2026 08:07:05 +0000 (0:00:00.803) 0:21:12.151 ********
2026-04-16 08:07:17.149813 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149823 | orchestrator |
2026-04-16 08:07:17.149881 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:07:17.149903 | orchestrator | Thursday 16 April 2026 08:07:06 +0000 (0:00:00.790) 0:21:12.941 ********
2026-04-16 08:07:17.149922 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149941 | orchestrator |
2026-04-16 08:07:17.149954 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:07:17.149965 | orchestrator | Thursday 16 April 2026 08:07:06 +0000 (0:00:00.757) 0:21:13.698 ********
2026-04-16 08:07:17.149975 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.149986 | orchestrator |
2026-04-16 08:07:17.149997 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:07:17.150009 | orchestrator | Thursday 16 April 2026 08:07:07 +0000 (0:00:00.804) 0:21:14.503 ********
2026-04-16 08:07:17.150081 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150094 | orchestrator |
2026-04-16 08:07:17.150105 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:07:17.150116 | orchestrator | Thursday 16 April 2026 08:07:08 +0000 (0:00:00.799) 0:21:15.302 ********
2026-04-16 08:07:17.150137 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150148 | orchestrator |
2026-04-16 08:07:17.150159 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:07:17.150170 | orchestrator | Thursday 16 April 2026 08:07:09 +0000 (0:00:00.765) 0:21:16.067 ********
2026-04-16 08:07:17.150180 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150191 | orchestrator |
2026-04-16 08:07:17.150202 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:07:17.150213 | orchestrator | Thursday 16 April 2026 08:07:10 +0000 (0:00:00.762) 0:21:16.830 ********
2026-04-16 08:07:17.150224 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150234 | orchestrator |
2026-04-16 08:07:17.150245 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:07:17.150256 | orchestrator | Thursday 16 April 2026 08:07:10 +0000 (0:00:00.786) 0:21:17.616 ********
2026-04-16 08:07:17.150266 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150277 | orchestrator |
2026-04-16 08:07:17.150288 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:07:17.150298 | orchestrator | Thursday 16 April 2026 08:07:11 +0000 (0:00:00.769) 0:21:18.385 ********
2026-04-16 08:07:17.150309 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150320 | orchestrator |
2026-04-16 08:07:17.150330 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:07:17.150341 | orchestrator | Thursday 16 April 2026 08:07:12 +0000 (0:00:00.767) 0:21:19.152 ********
2026-04-16 08:07:17.150352 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150363 | orchestrator |
2026-04-16 08:07:17.150373 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:07:17.150384 | orchestrator | Thursday 16 April 2026 08:07:13 +0000 (0:00:00.858) 0:21:20.011 ********
2026-04-16 08:07:17.150394 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150405 | orchestrator |
2026-04-16 08:07:17.150416 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:07:17.150427 | orchestrator | Thursday 16 April 2026 08:07:14 +0000 (0:00:00.753) 0:21:20.764 ********
2026-04-16 08:07:17.150438 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150448 | orchestrator |
2026-04-16 08:07:17.150459 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:07:17.150470 | orchestrator | Thursday 16 April 2026 08:07:14 +0000 (0:00:00.855) 0:21:21.620 ********
2026-04-16 08:07:17.150481 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150491 | orchestrator |
2026-04-16 08:07:17.150502 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:07:17.150513 | orchestrator | Thursday 16 April 2026 08:07:15 +0000 (0:00:00.767) 0:21:22.387 ********
2026-04-16 08:07:17.150523 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150534 | orchestrator |
2026-04-16 08:07:17.150551 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:07:17.150563 | orchestrator | Thursday 16 April 2026 08:07:16 +0000 (0:00:00.748) 0:21:23.136 ********
2026-04-16 08:07:17.150574 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:07:17.150585 | orchestrator |
2026-04-16 08:07:17.150596 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:07:17.150616 | orchestrator | Thursday 16 April 2026 08:07:17 +0000 (0:00:00.758) 0:21:23.894 ********
2026-04-16 08:08:05.482445 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.482557 | orchestrator |
2026-04-16 08:08:05.482572 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:08:05.482584 | orchestrator | Thursday 16 April 2026 08:07:17 +0000 (0:00:00.751) 0:21:24.646 ********
2026-04-16 08:08:05.482594 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.482603 | orchestrator |
2026-04-16 08:08:05.482616 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:08:05.482660 | orchestrator | Thursday 16 April 2026 08:07:18 +0000 (0:00:00.760) 0:21:25.406 ********
2026-04-16 08:08:05.482678 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.482693 | orchestrator |
2026-04-16 08:08:05.482709 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:08:05.482722 | orchestrator | Thursday 16 April 2026 08:07:19 +0000 (0:00:00.746) 0:21:26.153 ********
2026-04-16 08:08:05.482736 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:08:05.482753 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:08:05.482770 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:08:05.482920 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.482934 | orchestrator |
2026-04-16 08:08:05.482944 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:08:05.482954 | orchestrator | Thursday 16 April 2026 08:07:20 +0000 (0:00:01.325) 0:21:27.479 ********
2026-04-16 08:08:05.482968 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:08:05.482985 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:08:05.483001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:08:05.483027 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483044 | orchestrator |
2026-04-16 08:08:05.483061 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:08:05.483078 | orchestrator | Thursday 16 April 2026 08:07:22 +0000 (0:00:01.329) 0:21:28.809 ********
2026-04-16 08:08:05.483097 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:08:05.483115 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:08:05.483133 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:08:05.483150 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483167 | orchestrator |
2026-04-16 08:08:05.483184 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:08:05.483201 | orchestrator | Thursday 16 April 2026 08:07:23 +0000 (0:00:01.024) 0:21:29.834 ********
2026-04-16 08:08:05.483219 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483237 | orchestrator |
2026-04-16 08:08:05.483253 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:08:05.483271 | orchestrator | Thursday 16 April 2026 08:07:23 +0000 (0:00:00.770) 0:21:30.605 ********
2026-04-16 08:08:05.483289 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-16 08:08:05.483306 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483323 | orchestrator |
2026-04-16 08:08:05.483340 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:08:05.483357 | orchestrator | Thursday 16 April 2026 08:07:24 +0000 (0:00:00.889) 0:21:31.495 ********
2026-04-16 08:08:05.483373 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483391 | orchestrator |
2026-04-16 08:08:05.483408 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-16 08:08:05.483425 | orchestrator | Thursday 16 April 2026 08:07:25 +0000 (0:00:00.762) 0:21:32.257 ********
2026-04-16 08:08:05.483443 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 08:08:05.483462 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 08:08:05.483479 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 08:08:05.483494 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483511 | orchestrator |
2026-04-16 08:08:05.483529 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-16 08:08:05.483545 | orchestrator | Thursday 16 April 2026 08:07:26 +0000 (0:00:01.045) 0:21:33.303 ********
2026-04-16 08:08:05.483561 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483580 | orchestrator |
2026-04-16 08:08:05.483598 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-16 08:08:05.483667 | orchestrator | Thursday 16 April 2026 08:07:27 +0000 (0:00:00.759) 0:21:34.062 ********
2026-04-16 08:08:05.483689 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483706 | orchestrator |
2026-04-16 08:08:05.483724 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-16 08:08:05.483744 | orchestrator | Thursday 16 April 2026 08:07:28 +0000 (0:00:00.772) 0:21:34.835 ********
2026-04-16 08:08:05.483762 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483778 | orchestrator |
2026-04-16 08:08:05.483796 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-16 08:08:05.483814 | orchestrator | Thursday 16 April 2026 08:07:28 +0000 (0:00:00.739) 0:21:35.574 ********
2026-04-16 08:08:05.483831 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:08:05.483873 | orchestrator |
2026-04-16 08:08:05.483891 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-16 08:08:05.483908 | orchestrator |
2026-04-16 08:08:05.483924 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-16 08:08:05.483956 | orchestrator | Thursday 16 April 2026 08:07:30 +0000 (0:00:01.405) 0:21:36.979 ********
2026-04-16 08:08:05.483966 | orchestrator | changed: [testbed-node-0]
2026-04-16 08:08:05.483976 | orchestrator |
2026-04-16 08:08:05.483985 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-16 08:08:05.483995 | orchestrator | Thursday 16 April 2026 08:07:43 +0000 (0:00:12.950) 0:21:49.930 ********
2026-04-16 08:08:05.484004 | orchestrator | changed: [testbed-node-0]
2026-04-16 08:08:05.484014 | orchestrator |
2026-04-16 08:08:05.484023 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 08:08:05.484057 | orchestrator | Thursday 16 April 2026 08:07:45 +0000 (0:00:02.599) 0:21:52.529 ********
2026-04-16 08:08:05.484067 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-16 08:08:05.484077 | orchestrator |
2026-04-16 08:08:05.484086 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 08:08:05.484096 | orchestrator | Thursday 16 April 2026 08:07:46 +0000 (0:00:01.109) 0:21:53.638 ********
2026-04-16 08:08:05.484106 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484115 | orchestrator |
2026-04-16 08:08:05.484125 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 08:08:05.484135 | orchestrator | Thursday 16 April 2026 08:07:48 +0000 (0:00:01.435) 0:21:55.074 ********
2026-04-16 08:08:05.484144 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484154 | orchestrator |
2026-04-16 08:08:05.484163 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:08:05.484173 | orchestrator | Thursday 16 April 2026 08:07:49 +0000 (0:00:01.110) 0:21:56.185 ********
2026-04-16 08:08:05.484182 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484192 | orchestrator |
2026-04-16 08:08:05.484202 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:08:05.484211 | orchestrator | Thursday 16 April 2026 08:07:50 +0000 (0:00:01.460) 0:21:57.645 ********
2026-04-16 08:08:05.484221 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484230 | orchestrator |
2026-04-16 08:08:05.484240 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 08:08:05.484250 | orchestrator | Thursday 16 April 2026 08:07:52 +0000 (0:00:01.124) 0:21:58.769 ********
2026-04-16 08:08:05.484260 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484269 | orchestrator |
2026-04-16 08:08:05.484279 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 08:08:05.484288 | orchestrator | Thursday 16 April 2026 08:07:53 +0000 (0:00:01.122) 0:21:59.891 ********
2026-04-16 08:08:05.484298 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484307 | orchestrator |
2026-04-16 08:08:05.484317 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 08:08:05.484327 | orchestrator | Thursday 16 April 2026 08:07:54 +0000 (0:00:01.157) 0:22:01.049 ********
2026-04-16 08:08:05.484337 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:05.484355 | orchestrator |
2026-04-16 08:08:05.484365 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 08:08:05.484375 | orchestrator | Thursday 16 April 2026 08:07:55 +0000 (0:00:01.150) 0:22:02.200 ********
2026-04-16 08:08:05.484384 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484394 | orchestrator |
2026-04-16 08:08:05.484403 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 08:08:05.484413 | orchestrator | Thursday 16 April 2026 08:07:56 +0000 (0:00:01.114) 0:22:03.314 ********
2026-04-16 08:08:05.484423 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:08:05.484432 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:08:05.484442 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:08:05.484451 | orchestrator |
2026-04-16 08:08:05.484461 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 08:08:05.484471 | orchestrator | Thursday 16 April 2026 08:07:58 +0000 (0:00:01.907) 0:22:05.222 ********
2026-04-16 08:08:05.484480 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:05.484490 | orchestrator |
2026-04-16 08:08:05.484499 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 08:08:05.484509 | orchestrator | Thursday 16 April 2026 08:07:59 +0000 (0:00:01.223) 0:22:06.446 ********
2026-04-16 08:08:05.484518 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:08:05.484528 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:08:05.484538 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:08:05.484547 | orchestrator |
2026-04-16 08:08:05.484557 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 08:08:05.484567 | orchestrator | Thursday 16 April 2026 08:08:02 +0000 (0:00:02.824) 0:22:09.270 ********
2026-04-16 08:08:05.484577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:08:05.484586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 08:08:05.484596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 08:08:05.484606 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:05.484615 | orchestrator |
2026-04-16 08:08:05.484625 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 08:08:05.484634 | orchestrator | Thursday 16 April 2026 08:08:03 +0000 (0:00:01.371) 0:22:10.642 ********
2026-04-16 08:08:05.484646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:08:05.484664 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:08:05.484674 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:08:05.484684 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:05.484694 | orchestrator |
2026-04-16 08:08:05.484704 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 08:08:05.484720 | orchestrator | Thursday 16 April 2026 08:08:05 +0000 (0:00:01.586) 0:22:12.229 ********
2026-04-16 08:08:25.259378 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259533 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259572 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.259584 | orchestrator |
2026-04-16 08:08:25.259595 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 08:08:25.259606 | orchestrator | Thursday 16 April 2026 08:08:06 +0000 (0:00:01.207) 0:22:13.437 ********
2026-04-16 08:08:25.259618 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:08:00.210919', 'end': '2026-04-16 08:08:00.259604', 'delta': '0:00:00.048685', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259631 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:08:00.753404', 'end': '2026-04-16 08:08:00.814661', 'delta': '0:00:00.061257', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259655 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:08:01.338949', 'end': '2026-04-16 08:08:01.386754', 'delta': '0:00:00.047805', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:08:25.259666 | orchestrator |
2026-04-16 08:08:25.259676 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-16 08:08:25.259686 | orchestrator | Thursday 16 April 2026 08:08:07 +0000 (0:00:01.199) 0:22:14.636 ********
2026-04-16 08:08:25.259695 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:25.259706 | orchestrator |
2026-04-16 08:08:25.259715 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 08:08:25.259759 | orchestrator | Thursday 16 April 2026 08:08:09 +0000 (0:00:01.296) 0:22:15.933 ********
2026-04-16 08:08:25.259776 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.259790 | orchestrator |
2026-04-16 08:08:25.259805 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 08:08:25.259820 | orchestrator | Thursday 16 April 2026 08:08:10 +0000 (0:00:01.265) 0:22:17.199 ********
2026-04-16 08:08:25.259835 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:25.259878 | orchestrator |
2026-04-16 08:08:25.259895 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 08:08:25.259912 | orchestrator | Thursday 16 April 2026 08:08:11 +0000 (0:00:01.116) 0:22:18.315 ********
2026-04-16 08:08:25.259930 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:25.259947 | orchestrator |
2026-04-16 08:08:25.259964 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:08:25.259977 | orchestrator | Thursday 16 April 2026 08:08:13 +0000 (0:00:02.001) 0:22:20.317 ********
2026-04-16 08:08:25.259989 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:08:25.259999 | orchestrator |
2026-04-16 08:08:25.260011 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 08:08:25.260022 | orchestrator | Thursday 16 April 2026 08:08:14 +0000 (0:00:01.129) 0:22:21.446 ********
2026-04-16 08:08:25.260033 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260044 | orchestrator |
2026-04-16 08:08:25.260054 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 08:08:25.260066 | orchestrator | Thursday 16 April 2026 08:08:15 +0000 (0:00:01.122) 0:22:22.568 ********
2026-04-16 08:08:25.260077 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260089 | orchestrator |
2026-04-16 08:08:25.260099 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:08:25.260111 | orchestrator | Thursday 16 April 2026 08:08:17 +0000 (0:00:01.499) 0:22:24.068 ********
2026-04-16 08:08:25.260122 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260134 | orchestrator |
2026-04-16 08:08:25.260145 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 08:08:25.260155 | orchestrator | Thursday 16 April 2026 08:08:18 +0000 (0:00:01.111) 0:22:25.180 ********
2026-04-16 08:08:25.260167 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260178 | orchestrator |
2026-04-16 08:08:25.260189 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 08:08:25.260200 | orchestrator | Thursday 16 April 2026 08:08:19 +0000 (0:00:01.102) 0:22:26.282 ********
2026-04-16 08:08:25.260212 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260223 | orchestrator |
2026-04-16 08:08:25.260234 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 08:08:25.260245 | orchestrator | Thursday 16 April 2026 08:08:20 +0000 (0:00:01.154) 0:22:27.437 ********
2026-04-16 08:08:25.260255 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260266 | orchestrator |
2026-04-16 08:08:25.260275 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 08:08:25.260285 | orchestrator | Thursday 16 April 2026 08:08:21 +0000 (0:00:01.114) 0:22:28.551 ********
2026-04-16 08:08:25.260294 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:08:25.260304 | orchestrator |
2026-04-16 08:08:25.260314 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 08:08:25.260323 | orchestrator | Thursday 16 April 2026 08:08:22 +0000 (0:00:01.104) 0:22:29.656 ********
2026-04-16 08:08:25.260333 | orchestrator |
skipping: [testbed-node-0] 2026-04-16 08:08:25.260342 | orchestrator | 2026-04-16 08:08:25.260352 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:08:25.260362 | orchestrator | Thursday 16 April 2026 08:08:24 +0000 (0:00:01.109) 0:22:30.765 ******** 2026-04-16 08:08:25.260371 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:25.260381 | orchestrator | 2026-04-16 08:08:25.260390 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:08:25.260414 | orchestrator | Thursday 16 April 2026 08:08:25 +0000 (0:00:01.130) 0:22:31.896 ******** 2026-04-16 08:08:25.260430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:25.260458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:25.260483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:25.260513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:08:26.470307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:26.470427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:26.470438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:26.470466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 
'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:08:26.470499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:26.470523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:08:26.470531 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:26.470540 | orchestrator | 2026-04-16 08:08:26.470548 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:08:26.470555 | orchestrator | Thursday 16 April 2026 08:08:26 +0000 (0:00:01.228) 0:22:33.125 ******** 2026-04-16 08:08:26.470564 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:26.470572 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:26.470586 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:26.470594 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 
'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:26.470662 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:26.470676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:43.617365 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:43.617476 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:43.617530 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:43.617560 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:08:43.617571 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
08:08:43.617582 | orchestrator | 2026-04-16 08:08:43.617592 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:08:43.617603 | orchestrator | Thursday 16 April 2026 08:08:27 +0000 (0:00:01.216) 0:22:34.341 ******** 2026-04-16 08:08:43.617613 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:08:43.617623 | orchestrator | 2026-04-16 08:08:43.617633 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:08:43.617642 | orchestrator | Thursday 16 April 2026 08:08:29 +0000 (0:00:01.515) 0:22:35.857 ******** 2026-04-16 08:08:43.617652 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:08:43.617661 | orchestrator | 2026-04-16 08:08:43.617689 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:08:43.617699 | orchestrator | Thursday 16 April 2026 08:08:30 +0000 (0:00:01.105) 0:22:36.963 ******** 2026-04-16 08:08:43.617719 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:08:43.617729 | orchestrator | 2026-04-16 08:08:43.617738 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:08:43.617755 | orchestrator | Thursday 16 April 2026 08:08:31 +0000 (0:00:01.474) 0:22:38.437 ******** 2026-04-16 08:08:43.617765 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:43.617774 | orchestrator | 2026-04-16 08:08:43.617784 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:08:43.617793 | orchestrator | Thursday 16 April 2026 08:08:32 +0000 (0:00:01.084) 0:22:39.522 ******** 2026-04-16 08:08:43.617802 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:43.617812 | orchestrator | 2026-04-16 08:08:43.617821 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:08:43.617830 | orchestrator | Thursday 16 April 2026 
08:08:33 +0000 (0:00:01.201) 0:22:40.723 ******** 2026-04-16 08:08:43.617839 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:43.617849 | orchestrator | 2026-04-16 08:08:43.617945 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:08:43.617962 | orchestrator | Thursday 16 April 2026 08:08:35 +0000 (0:00:01.118) 0:22:41.841 ******** 2026-04-16 08:08:43.617975 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:08:43.617987 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-04-16 08:08:43.617997 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-16 08:08:43.618007 | orchestrator | 2026-04-16 08:08:43.618068 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:08:43.618081 | orchestrator | Thursday 16 April 2026 08:08:36 +0000 (0:00:01.707) 0:22:43.549 ******** 2026-04-16 08:08:43.618092 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 08:08:43.618102 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 08:08:43.618112 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 08:08:43.618122 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:43.618132 | orchestrator | 2026-04-16 08:08:43.618143 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:08:43.618153 | orchestrator | Thursday 16 April 2026 08:08:37 +0000 (0:00:01.144) 0:22:44.693 ******** 2026-04-16 08:08:43.618164 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:08:43.618183 | orchestrator | 2026-04-16 08:08:43.618194 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:08:43.618205 | orchestrator | Thursday 16 April 2026 08:08:39 +0000 (0:00:01.200) 0:22:45.894 ******** 2026-04-16 08:08:43.618215 | 
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:08:43.618225 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:08:43.618236 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:08:43.618246 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:08:43.618256 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:08:43.618267 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:08:43.618277 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:08:43.618288 | orchestrator | 2026-04-16 08:08:43.618298 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:08:43.618314 | orchestrator | Thursday 16 April 2026 08:08:40 +0000 (0:00:01.770) 0:22:47.665 ******** 2026-04-16 08:08:43.618326 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:08:43.618337 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:08:43.618345 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:08:43.618354 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:08:43.618364 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:08:43.618380 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:08:43.618389 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:08:43.618397 | orchestrator | 2026-04-16 08:08:43.618406 | 
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:08:43.618415 | orchestrator | Thursday 16 April 2026 08:08:43 +0000 (0:00:02.503) 0:22:50.169 ******** 2026-04-16 08:08:43.618424 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-04-16 08:08:43.618433 | orchestrator | 2026-04-16 08:08:43.618451 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:09:33.222319 | orchestrator | Thursday 16 April 2026 08:08:44 +0000 (0:00:01.119) 0:22:51.289 ******** 2026-04-16 08:09:33.222434 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-04-16 08:09:33.222451 | orchestrator | 2026-04-16 08:09:33.222462 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:09:33.222473 | orchestrator | Thursday 16 April 2026 08:08:45 +0000 (0:00:01.126) 0:22:52.416 ******** 2026-04-16 08:09:33.222483 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:09:33.222494 | orchestrator | 2026-04-16 08:09:33.222505 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:09:33.222515 | orchestrator | Thursday 16 April 2026 08:08:47 +0000 (0:00:01.498) 0:22:53.914 ******** 2026-04-16 08:09:33.222525 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.222535 | orchestrator | 2026-04-16 08:09:33.222545 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:09:33.222555 | orchestrator | Thursday 16 April 2026 08:08:48 +0000 (0:00:01.123) 0:22:55.038 ******** 2026-04-16 08:09:33.222565 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.222575 | orchestrator | 2026-04-16 08:09:33.222585 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2026-04-16 08:09:33.222595 | orchestrator | Thursday 16 April 2026 08:08:49 +0000 (0:00:01.095) 0:22:56.134 ******** 2026-04-16 08:09:33.222605 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.222639 | orchestrator | 2026-04-16 08:09:33.222679 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 08:09:33.222697 | orchestrator | Thursday 16 April 2026 08:08:50 +0000 (0:00:01.134) 0:22:57.269 ******** 2026-04-16 08:09:33.222713 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:09:33.222729 | orchestrator | 2026-04-16 08:09:33.222743 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 08:09:33.222759 | orchestrator | Thursday 16 April 2026 08:08:52 +0000 (0:00:01.517) 0:22:58.786 ******** 2026-04-16 08:09:33.222775 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.222792 | orchestrator | 2026-04-16 08:09:33.222809 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 08:09:33.222820 | orchestrator | Thursday 16 April 2026 08:08:53 +0000 (0:00:01.106) 0:22:59.892 ******** 2026-04-16 08:09:33.222837 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.222853 | orchestrator | 2026-04-16 08:09:33.222898 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 08:09:33.222917 | orchestrator | Thursday 16 April 2026 08:08:54 +0000 (0:00:01.092) 0:23:00.984 ******** 2026-04-16 08:09:33.222934 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:09:33.222951 | orchestrator | 2026-04-16 08:09:33.222969 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:09:33.222987 | orchestrator | Thursday 16 April 2026 08:08:55 +0000 (0:00:01.627) 0:23:02.612 ******** 2026-04-16 08:09:33.223005 | orchestrator | ok: [testbed-node-0] 2026-04-16 
08:09:33.223022 | orchestrator | 2026-04-16 08:09:33.223041 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:09:33.223060 | orchestrator | Thursday 16 April 2026 08:08:57 +0000 (0:00:01.597) 0:23:04.210 ******** 2026-04-16 08:09:33.223108 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.223121 | orchestrator | 2026-04-16 08:09:33.223133 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:09:33.223144 | orchestrator | Thursday 16 April 2026 08:08:58 +0000 (0:00:01.103) 0:23:05.313 ******** 2026-04-16 08:09:33.223155 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:09:33.223168 | orchestrator | 2026-04-16 08:09:33.223179 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:09:33.223191 | orchestrator | Thursday 16 April 2026 08:08:59 +0000 (0:00:01.118) 0:23:06.432 ******** 2026-04-16 08:09:33.223204 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.223213 | orchestrator | 2026-04-16 08:09:33.223223 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:09:33.223233 | orchestrator | Thursday 16 April 2026 08:09:00 +0000 (0:00:01.131) 0:23:07.563 ******** 2026-04-16 08:09:33.223242 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.223252 | orchestrator | 2026-04-16 08:09:33.223264 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:09:33.223280 | orchestrator | Thursday 16 April 2026 08:09:01 +0000 (0:00:01.109) 0:23:08.673 ******** 2026-04-16 08:09:33.223294 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:09:33.223309 | orchestrator | 2026-04-16 08:09:33.223325 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:09:33.223335 | orchestrator | Thursday 16 April 
2026 08:09:03 +0000 (0:00:01.096) 0:23:09.770 ********
2026-04-16 08:09:33.223360 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223370 | orchestrator |
2026-04-16 08:09:33.223379 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:09:33.223389 | orchestrator | Thursday 16 April 2026 08:09:04 +0000 (0:00:01.103) 0:23:10.874 ********
2026-04-16 08:09:33.223398 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223408 | orchestrator |
2026-04-16 08:09:33.223417 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:09:33.223427 | orchestrator | Thursday 16 April 2026 08:09:05 +0000 (0:00:01.098) 0:23:11.972 ********
2026-04-16 08:09:33.223437 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.223446 | orchestrator |
2026-04-16 08:09:33.223455 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:09:33.223467 | orchestrator | Thursday 16 April 2026 08:09:06 +0000 (0:00:01.124) 0:23:13.097 ********
2026-04-16 08:09:33.223484 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.223512 | orchestrator |
2026-04-16 08:09:33.223528 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:09:33.223543 | orchestrator | Thursday 16 April 2026 08:09:07 +0000 (0:00:01.130) 0:23:14.227 ********
2026-04-16 08:09:33.223558 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.223574 | orchestrator |
2026-04-16 08:09:33.223587 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:09:33.223626 | orchestrator | Thursday 16 April 2026 08:09:08 +0000 (0:00:01.160) 0:23:15.388 ********
2026-04-16 08:09:33.223642 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223661 | orchestrator |
2026-04-16 08:09:33.223677 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:09:33.223693 | orchestrator | Thursday 16 April 2026 08:09:09 +0000 (0:00:01.122) 0:23:16.510 ********
2026-04-16 08:09:33.223710 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223721 | orchestrator |
2026-04-16 08:09:33.223730 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:09:33.223740 | orchestrator | Thursday 16 April 2026 08:09:10 +0000 (0:00:01.111) 0:23:17.622 ********
2026-04-16 08:09:33.223749 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223759 | orchestrator |
2026-04-16 08:09:33.223768 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:09:33.223778 | orchestrator | Thursday 16 April 2026 08:09:11 +0000 (0:00:01.095) 0:23:18.718 ********
2026-04-16 08:09:33.223801 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223811 | orchestrator |
2026-04-16 08:09:33.223820 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:09:33.223830 | orchestrator | Thursday 16 April 2026 08:09:13 +0000 (0:00:01.133) 0:23:19.851 ********
2026-04-16 08:09:33.223839 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223849 | orchestrator |
2026-04-16 08:09:33.223858 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:09:33.223906 | orchestrator | Thursday 16 April 2026 08:09:14 +0000 (0:00:01.141) 0:23:20.993 ********
2026-04-16 08:09:33.223930 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.223940 | orchestrator |
2026-04-16 08:09:33.223953 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:09:33.223973 | orchestrator | Thursday 16 April 2026 08:09:15 +0000 (0:00:01.126) 0:23:22.119 ********
2026-04-16 08:09:33.223997 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224013 | orchestrator |
2026-04-16 08:09:33.224029 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:09:33.224046 | orchestrator | Thursday 16 April 2026 08:09:16 +0000 (0:00:01.112) 0:23:23.231 ********
2026-04-16 08:09:33.224060 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224075 | orchestrator |
2026-04-16 08:09:33.224091 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:09:33.224108 | orchestrator | Thursday 16 April 2026 08:09:17 +0000 (0:00:01.078) 0:23:24.310 ********
2026-04-16 08:09:33.224125 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224140 | orchestrator |
2026-04-16 08:09:33.224158 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:09:33.224175 | orchestrator | Thursday 16 April 2026 08:09:18 +0000 (0:00:01.129) 0:23:25.440 ********
2026-04-16 08:09:33.224191 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224205 | orchestrator |
2026-04-16 08:09:33.224215 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:09:33.224225 | orchestrator | Thursday 16 April 2026 08:09:19 +0000 (0:00:01.103) 0:23:26.543 ********
2026-04-16 08:09:33.224235 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224244 | orchestrator |
2026-04-16 08:09:33.224254 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:09:33.224263 | orchestrator | Thursday 16 April 2026 08:09:20 +0000 (0:00:01.098) 0:23:27.642 ********
2026-04-16 08:09:33.224273 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224282 | orchestrator |
2026-04-16 08:09:33.224292 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:09:33.224302 | orchestrator | Thursday 16 April 2026 08:09:21 +0000 (0:00:01.084) 0:23:28.726 ********
2026-04-16 08:09:33.224311 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.224321 | orchestrator |
2026-04-16 08:09:33.224331 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:09:33.224341 | orchestrator | Thursday 16 April 2026 08:09:23 +0000 (0:00:01.934) 0:23:30.661 ********
2026-04-16 08:09:33.224350 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.224360 | orchestrator |
2026-04-16 08:09:33.224369 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:09:33.224379 | orchestrator | Thursday 16 April 2026 08:09:26 +0000 (0:00:02.465) 0:23:33.126 ********
2026-04-16 08:09:33.224389 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-04-16 08:09:33.224399 | orchestrator |
2026-04-16 08:09:33.224409 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 08:09:33.224418 | orchestrator | Thursday 16 April 2026 08:09:27 +0000 (0:00:01.141) 0:23:34.268 ********
2026-04-16 08:09:33.224437 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224447 | orchestrator |
2026-04-16 08:09:33.224457 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 08:09:33.224466 | orchestrator | Thursday 16 April 2026 08:09:28 +0000 (0:00:01.135) 0:23:35.403 ********
2026-04-16 08:09:33.224485 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224495 | orchestrator |
2026-04-16 08:09:33.224504 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 08:09:33.224514 | orchestrator | Thursday 16 April 2026 08:09:29 +0000 (0:00:01.098) 0:23:36.502 ********
2026-04-16 08:09:33.224524 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:09:33.224533 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:09:33.224543 | orchestrator |
2026-04-16 08:09:33.224552 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 08:09:33.224562 | orchestrator | Thursday 16 April 2026 08:09:31 +0000 (0:00:01.863) 0:23:38.366 ********
2026-04-16 08:09:33.224572 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:09:33.224581 | orchestrator |
2026-04-16 08:09:33.224591 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 08:09:33.224600 | orchestrator | Thursday 16 April 2026 08:09:33 +0000 (0:00:01.457) 0:23:39.824 ********
2026-04-16 08:09:33.224610 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:09:33.224620 | orchestrator |
2026-04-16 08:09:33.224640 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 08:10:19.157203 | orchestrator | Thursday 16 April 2026 08:09:34 +0000 (0:00:01.112) 0:23:40.937 ********
2026-04-16 08:10:19.157330 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.157358 | orchestrator |
2026-04-16 08:10:19.157379 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 08:10:19.157399 | orchestrator | Thursday 16 April 2026 08:09:35 +0000 (0:00:01.110) 0:23:42.047 ********
2026-04-16 08:10:19.157418 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.157438 | orchestrator |
2026-04-16 08:10:19.157459 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 08:10:19.157478 | orchestrator | Thursday 16 April 2026 08:09:36 +0000 (0:00:01.125) 0:23:43.173 ********
2026-04-16 08:10:19.157498 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-04-16 08:10:19.157519 | orchestrator |
2026-04-16 08:10:19.157537 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-16 08:10:19.157554 | orchestrator | Thursday 16 April 2026 08:09:37 +0000 (0:00:01.119) 0:23:44.292 ********
2026-04-16 08:10:19.157572 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:10:19.157592 | orchestrator |
2026-04-16 08:10:19.157613 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-16 08:10:19.157633 | orchestrator | Thursday 16 April 2026 08:09:39 +0000 (0:00:01.709) 0:23:46.002 ********
2026-04-16 08:10:19.157652 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2) 
2026-04-16 08:10:19.157672 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2) 
2026-04-16 08:10:19.157690 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4) 
2026-04-16 08:10:19.157710 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.157730 | orchestrator |
2026-04-16 08:10:19.157749 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-16 08:10:19.157769 | orchestrator | Thursday 16 April 2026 08:09:40 +0000 (0:00:01.147) 0:23:47.150 ********
2026-04-16 08:10:19.157790 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.157811 | orchestrator |
2026-04-16 08:10:19.157830 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-16 08:10:19.157850 | orchestrator | Thursday 16 April 2026 08:09:41 +0000 (0:00:01.118) 0:23:48.268 ********
2026-04-16 08:10:19.157869 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.157917 | orchestrator |
2026-04-16 08:10:19.157939 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-16 08:10:19.157958 | orchestrator | Thursday 16 April 2026 08:09:42 +0000 (0:00:01.147) 0:23:49.415 ********
2026-04-16 08:10:19.158011 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158113 | orchestrator |
2026-04-16 08:10:19.158135 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-16 08:10:19.158155 | orchestrator | Thursday 16 April 2026 08:09:43 +0000 (0:00:01.100) 0:23:50.516 ********
2026-04-16 08:10:19.158174 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158192 | orchestrator |
2026-04-16 08:10:19.158210 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-16 08:10:19.158228 | orchestrator | Thursday 16 April 2026 08:09:44 +0000 (0:00:01.121) 0:23:51.638 ********
2026-04-16 08:10:19.158246 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158266 | orchestrator |
2026-04-16 08:10:19.158284 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 08:10:19.158301 | orchestrator | Thursday 16 April 2026 08:09:46 +0000 (0:00:01.140) 0:23:52.779 ********
2026-04-16 08:10:19.158319 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:10:19.158337 | orchestrator |
2026-04-16 08:10:19.158357 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 08:10:19.158377 | orchestrator | Thursday 16 April 2026 08:09:48 +0000 (0:00:02.584) 0:23:55.363 ********
2026-04-16 08:10:19.158396 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:10:19.158417 | orchestrator |
2026-04-16 08:10:19.158438 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 08:10:19.158457 | orchestrator | Thursday 16 April 2026 08:09:49 +0000 (0:00:01.111) 0:23:56.475 ********
2026-04-16 08:10:19.158477 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-04-16 08:10:19.158498 | orchestrator |
2026-04-16 08:10:19.158517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-16 08:10:19.158536 | orchestrator | Thursday 16 April 2026 08:09:50 +0000 (0:00:01.103) 0:23:57.578 ********
2026-04-16 08:10:19.158576 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158596 | orchestrator |
2026-04-16 08:10:19.158612 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-16 08:10:19.158623 | orchestrator | Thursday 16 April 2026 08:09:51 +0000 (0:00:01.118) 0:23:58.697 ********
2026-04-16 08:10:19.158634 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158645 | orchestrator |
2026-04-16 08:10:19.158656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-16 08:10:19.158666 | orchestrator | Thursday 16 April 2026 08:09:53 +0000 (0:00:01.121) 0:23:59.818 ********
2026-04-16 08:10:19.158677 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158688 | orchestrator |
2026-04-16 08:10:19.158699 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-16 08:10:19.158710 | orchestrator | Thursday 16 April 2026 08:09:54 +0000 (0:00:01.115) 0:24:00.934 ********
2026-04-16 08:10:19.158720 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158731 | orchestrator |
2026-04-16 08:10:19.158741 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-16 08:10:19.158752 | orchestrator | Thursday 16 April 2026 08:09:55 +0000 (0:00:01.150) 0:24:02.084 ********
2026-04-16 08:10:19.158763 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158773 | orchestrator |
2026-04-16 08:10:19.158784 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-16 08:10:19.158795 | orchestrator | Thursday 16 April 2026 08:09:56 +0000 (0:00:01.129) 0:24:03.214 ********
2026-04-16 08:10:19.158830 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158842 | orchestrator |
2026-04-16 08:10:19.158853 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-16 08:10:19.158864 | orchestrator | Thursday 16 April 2026 08:09:57 +0000 (0:00:01.163) 0:24:04.377 ********
2026-04-16 08:10:19.158875 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158915 | orchestrator |
2026-04-16 08:10:19.158928 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-16 08:10:19.158953 | orchestrator | Thursday 16 April 2026 08:09:58 +0000 (0:00:01.128) 0:24:05.506 ********
2026-04-16 08:10:19.158965 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.158975 | orchestrator |
2026-04-16 08:10:19.158986 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-16 08:10:19.158997 | orchestrator | Thursday 16 April 2026 08:09:59 +0000 (0:00:01.143) 0:24:06.650 ********
2026-04-16 08:10:19.159008 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:10:19.159018 | orchestrator |
2026-04-16 08:10:19.159029 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 08:10:19.159040 | orchestrator | Thursday 16 April 2026 08:10:01 +0000 (0:00:01.130) 0:24:07.780 ********
2026-04-16 08:10:19.159051 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-04-16 08:10:19.159063 | orchestrator |
2026-04-16 08:10:19.159074 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-16 08:10:19.159084 | orchestrator | Thursday 16 April 2026 08:10:02 +0000 (0:00:01.191) 0:24:08.972 ********
2026-04-16 08:10:19.159095 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-04-16 08:10:19.159107 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-04-16 08:10:19.159117 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-04-16 08:10:19.159128 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-04-16 08:10:19.159138 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-04-16 08:10:19.159149 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-04-16 08:10:19.159160 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-04-16 08:10:19.159171 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-04-16 08:10:19.159182 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-16 08:10:19.159193 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-16 08:10:19.159203 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-16 08:10:19.159214 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-16 08:10:19.159225 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-16 08:10:19.159236 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-16 08:10:19.159246 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-04-16 08:10:19.159257 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-04-16 08:10:19.159268 | orchestrator |
2026-04-16 08:10:19.159278 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-16 08:10:19.159289 | orchestrator | Thursday 16 April 2026 08:10:09 +0000 (0:00:06.939) 0:24:15.911 ********
2026-04-16 08:10:19.159300 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159311 | orchestrator |
2026-04-16 08:10:19.159321 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 08:10:19.159332 | orchestrator | Thursday 16 April 2026 08:10:10 +0000 (0:00:01.108) 0:24:17.020 ********
2026-04-16 08:10:19.159343 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159354 | orchestrator |
2026-04-16 08:10:19.159364 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:10:19.159375 | orchestrator | Thursday 16 April 2026 08:10:11 +0000 (0:00:01.094) 0:24:18.115 ********
2026-04-16 08:10:19.159386 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159396 | orchestrator |
2026-04-16 08:10:19.159407 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:10:19.159418 | orchestrator | Thursday 16 April 2026 08:10:12 +0000 (0:00:01.120) 0:24:19.235 ********
2026-04-16 08:10:19.159428 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159439 | orchestrator |
2026-04-16 08:10:19.159450 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:10:19.159460 | orchestrator | Thursday 16 April 2026 08:10:13 +0000 (0:00:01.086) 0:24:20.322 ********
2026-04-16 08:10:19.159478 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159489 | orchestrator |
2026-04-16 08:10:19.159505 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:10:19.159517 | orchestrator | Thursday 16 April 2026 08:10:14 +0000 (0:00:01.100) 0:24:21.423 ********
2026-04-16 08:10:19.159527 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159538 | orchestrator |
2026-04-16 08:10:19.159549 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:10:19.159560 | orchestrator | Thursday 16 April 2026 08:10:15 +0000 (0:00:01.112) 0:24:22.535 ********
2026-04-16 08:10:19.159571 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159582 | orchestrator |
2026-04-16 08:10:19.159592 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:10:19.159603 | orchestrator | Thursday 16 April 2026 08:10:16 +0000 (0:00:01.095) 0:24:23.631 ********
2026-04-16 08:10:19.159614 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159625 | orchestrator |
2026-04-16 08:10:19.159635 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:10:19.159646 | orchestrator | Thursday 16 April 2026 08:10:17 +0000 (0:00:01.107) 0:24:24.739 ********
2026-04-16 08:10:19.159657 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:10:19.159668 | orchestrator |
2026-04-16 08:10:19.159678 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:10:19.159697 | orchestrator | Thursday 16 April 2026 08:10:19 +0000 (0:00:01.162) 0:24:25.902 ********
2026-04-16 08:11:13.649052 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649175 | orchestrator |
2026-04-16 08:11:13.649192 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:11:13.649206 | orchestrator | Thursday 16 April 2026 08:10:20 +0000 (0:00:01.120) 0:24:27.023 ********
2026-04-16 08:11:13.649217 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649228 | orchestrator |
2026-04-16 08:11:13.649240 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:11:13.649251 | orchestrator | Thursday 16 April 2026 08:10:21 +0000 (0:00:01.088) 0:24:28.111 ********
2026-04-16 08:11:13.649262 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649273 | orchestrator |
2026-04-16 08:11:13.649284 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:11:13.649295 | orchestrator | Thursday 16 April 2026 08:10:22 +0000 (0:00:01.113) 0:24:29.225 ********
2026-04-16 08:11:13.649306 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649317 | orchestrator |
2026-04-16 08:11:13.649328 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:11:13.649340 | orchestrator | Thursday 16 April 2026 08:10:23 +0000 (0:00:01.183) 0:24:30.409 ********
2026-04-16 08:11:13.649351 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649362 | orchestrator |
2026-04-16 08:11:13.649373 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:11:13.649384 | orchestrator | Thursday 16 April 2026 08:10:24 +0000 (0:00:01.124) 0:24:31.534 ********
2026-04-16 08:11:13.649395 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649406 | orchestrator |
2026-04-16 08:11:13.649417 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:11:13.649427 | orchestrator | Thursday 16 April 2026 08:10:25 +0000 (0:00:01.214) 0:24:32.748 ********
2026-04-16 08:11:13.649438 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649449 | orchestrator |
2026-04-16 08:11:13.649460 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:11:13.649471 | orchestrator | Thursday 16 April 2026 08:10:27 +0000 (0:00:01.121) 0:24:33.869 ********
2026-04-16 08:11:13.649482 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649493 | orchestrator |
2026-04-16 08:11:13.649505 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:11:13.649544 | orchestrator | Thursday 16 April 2026 08:10:28 +0000 (0:00:01.130) 0:24:35.000 ********
2026-04-16 08:11:13.649556 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649567 | orchestrator |
2026-04-16 08:11:13.649578 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:11:13.649589 | orchestrator | Thursday 16 April 2026 08:10:29 +0000 (0:00:01.138) 0:24:36.138 ********
2026-04-16 08:11:13.649602 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649616 | orchestrator |
2026-04-16 08:11:13.649628 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:11:13.649645 | orchestrator | Thursday 16 April 2026 08:10:30 +0000 (0:00:01.127) 0:24:37.266 ********
2026-04-16 08:11:13.649663 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649682 | orchestrator |
2026-04-16 08:11:13.649700 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:11:13.649719 | orchestrator | Thursday 16 April 2026 08:10:31 +0000 (0:00:01.097) 0:24:38.363 ********
2026-04-16 08:11:13.649738 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649757 | orchestrator |
2026-04-16 08:11:13.649774 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:11:13.649790 | orchestrator | Thursday 16 April 2026 08:10:32 +0000 (0:00:01.132) 0:24:39.496 ********
2026-04-16 08:11:13.649808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-16 08:11:13.649825 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-16 08:11:13.649843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-16 08:11:13.649859 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.649875 | orchestrator |
2026-04-16 08:11:13.649892 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:11:13.649946 | orchestrator | Thursday 16 April 2026 08:10:34 +0000 (0:00:01.694) 0:24:41.191 ********
2026-04-16 08:11:13.649965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-16 08:11:13.649984 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-16 08:11:13.650003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-16 08:11:13.650098 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.650111 | orchestrator |
2026-04-16 08:11:13.650123 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:11:13.650133 | orchestrator | Thursday 16 April 2026 08:10:36 +0000 (0:00:01.659) 0:24:42.850 ********
2026-04-16 08:11:13.650144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3) 
2026-04-16 08:11:13.650155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4) 
2026-04-16 08:11:13.650166 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5) 
2026-04-16 08:11:13.650177 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.650188 | orchestrator |
2026-04-16 08:11:13.650199 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:11:13.650210 | orchestrator | Thursday 16 April 2026 08:10:37 +0000 (0:00:01.686) 0:24:44.536 ********
2026-04-16 08:11:13.650221 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.650232 | orchestrator |
2026-04-16 08:11:13.650242 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:11:13.650253 | orchestrator | Thursday 16 April 2026 08:10:38 +0000 (0:00:01.114) 0:24:45.651 ********
2026-04-16 08:11:13.650265 | orchestrator | skipping: [testbed-node-0] => (item=0) 
2026-04-16 08:11:13.650276 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.650286 | orchestrator |
2026-04-16 08:11:13.650297 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:11:13.650329 | orchestrator | Thursday 16 April 2026 08:10:40 +0000 (0:00:01.215) 0:24:46.867 ********
2026-04-16 08:11:13.650341 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.650353 | orchestrator |
2026-04-16 08:11:13.650364 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-16 08:11:13.650391 | orchestrator | Thursday 16 April 2026 08:10:41 +0000 (0:00:01.727) 0:24:48.595 ********
2026-04-16 08:11:13.650409 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:11:13.650428 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:11:13.650447 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:11:13.650466 | orchestrator |
2026-04-16 08:11:13.650484 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-16 08:11:13.650503 | orchestrator | Thursday 16 April 2026 08:10:43 +0000 (0:00:01.586) 0:24:50.181 ********
2026-04-16 08:11:13.650521 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-04-16 08:11:13.650538 | orchestrator |
2026-04-16 08:11:13.650555 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-16 08:11:13.650573 | orchestrator | Thursday 16 April 2026 08:10:44 +0000 (0:00:01.435) 0:24:51.617 ********
2026-04-16 08:11:13.650590 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.650607 | orchestrator |
2026-04-16 08:11:13.650624 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-16 08:11:13.650644 | orchestrator | Thursday 16 April 2026 08:10:46 +0000 (0:00:01.508) 0:24:53.126 ********
2026-04-16 08:11:13.650662 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.650680 | orchestrator |
2026-04-16 08:11:13.650699 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-16 08:11:13.650718 | orchestrator | Thursday 16 April 2026 08:10:47 +0000 (0:00:01.122) 0:24:54.248 ********
2026-04-16 08:11:13.650737 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 08:11:13.650756 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 08:11:13.650775 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 08:11:13.650787 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-04-16 08:11:13.650798 | orchestrator |
2026-04-16 08:11:13.650809 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-16 08:11:13.650820 | orchestrator | Thursday 16 April 2026 08:10:55 +0000 (0:00:07.569) 0:25:01.818 ********
2026-04-16 08:11:13.650830 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.650846 | orchestrator |
2026-04-16 08:11:13.650857 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-16 08:11:13.650868 | orchestrator | Thursday 16 April 2026 08:10:56 +0000 (0:00:01.160) 0:25:02.979 ********
2026-04-16 08:11:13.650879 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-04-16 08:11:13.650890 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 08:11:13.650965 | orchestrator |
2026-04-16 08:11:13.650979 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:11:13.650990 | orchestrator | Thursday 16 April 2026 08:10:59 +0000 (0:00:03.559) 0:25:06.539 ********
2026-04-16 08:11:13.651001 | orchestrator | skipping: [testbed-node-0] => (item=None) 
2026-04-16 08:11:13.651011 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 08:11:13.651022 | orchestrator |
2026-04-16 08:11:13.651033 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-16 08:11:13.651043 | orchestrator | Thursday 16 April 2026 08:11:01 +0000 (0:00:01.978) 0:25:08.518 ********
2026-04-16 08:11:13.651054 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.651065 | orchestrator |
2026-04-16 08:11:13.651085 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-16 08:11:13.651105 | orchestrator | Thursday 16 April 2026 08:11:03 +0000 (0:00:01.451) 0:25:09.969 ********
2026-04-16 08:11:13.651124 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.651143 | orchestrator |
2026-04-16 08:11:13.651162 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-16 08:11:13.651180 | orchestrator | Thursday 16 April 2026 08:11:04 +0000 (0:00:01.092) 0:25:11.061 ********
2026-04-16 08:11:13.651216 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.651238 | orchestrator |
2026-04-16 08:11:13.651258 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-16 08:11:13.651276 | orchestrator | Thursday 16 April 2026 08:11:05 +0000 (0:00:01.096) 0:25:12.157 ********
2026-04-16 08:11:13.651288 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-04-16 08:11:13.651299 | orchestrator |
2026-04-16 08:11:13.651319 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-16 08:11:13.651330 | orchestrator | Thursday 16 April 2026 08:11:06 +0000 (0:00:01.432) 0:25:13.590 ********
2026-04-16 08:11:13.651343 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.651361 | orchestrator |
2026-04-16 08:11:13.651377 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-16 08:11:13.651397 | orchestrator | Thursday 16 April 2026 08:11:07 +0000 (0:00:01.112) 0:25:14.702 ********
2026-04-16 08:11:13.651409 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:13.651419 | orchestrator |
2026-04-16 08:11:13.651430 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-16 08:11:13.651441 | orchestrator | Thursday 16 April 2026 08:11:09 +0000 (0:00:01.187) 0:25:15.890 ********
2026-04-16 08:11:13.651452 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-04-16 08:11:13.651463 | orchestrator |
2026-04-16 08:11:13.651473 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-16 08:11:13.651484 | orchestrator | Thursday 16 April 2026 08:11:10 +0000 (0:00:01.461) 0:25:17.351 ********
2026-04-16 08:11:13.651495 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.651506 | orchestrator |
2026-04-16 08:11:13.651517 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-16 08:11:13.651528 | orchestrator | Thursday 16 April 2026 08:11:12 +0000 (0:00:02.077) 0:25:19.429 ********
2026-04-16 08:11:13.651539 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:13.651550 | orchestrator |
2026-04-16 08:11:13.651574 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-16 08:11:47.428287 | orchestrator | Thursday 16 April 2026 08:11:14 +0000 (0:00:01.941) 0:25:21.370 ********
2026-04-16 08:11:47.428409 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:11:47.428428 | orchestrator |
2026-04-16 08:11:47.428451 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-16 08:11:47.428470 | orchestrator | Thursday 16 April 2026 08:11:16 +0000 (0:00:02.368) 0:25:23.739 ********
2026-04-16 08:11:47.428490 | orchestrator | changed: [testbed-node-0]
2026-04-16 08:11:47.428512 | orchestrator |
2026-04-16 08:11:47.428532 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-16 08:11:47.428552 | orchestrator | Thursday 16 April 2026 08:11:20 +0000 (0:00:03.954) 0:25:27.694 ********
2026-04-16 08:11:47.428567 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:11:47.428578 | orchestrator |
2026-04-16 08:11:47.428590 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-16 08:11:47.428601 | orchestrator |
2026-04-16 08:11:47.428612 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-16 08:11:47.428623 | orchestrator | Thursday 16 April 2026 08:11:21 +0000 (0:00:00.956) 0:25:28.651 ********
2026-04-16 08:11:47.428634 | orchestrator | changed: [testbed-node-1]
2026-04-16 08:11:47.428645 | orchestrator |
2026-04-16 08:11:47.428656 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-16 08:11:47.428667 | orchestrator | Thursday 16 April 2026 08:11:24 +0000 (0:00:02.460) 0:25:31.111 ********
2026-04-16 08:11:47.428678 | orchestrator | changed: [testbed-node-1]
2026-04-16 08:11:47.428772 | orchestrator |
2026-04-16 08:11:47.428784 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 08:11:47.428795 | orchestrator | Thursday 16 April 2026 08:11:26 +0000 (0:00:02.158) 0:25:33.269 ********
2026-04-16 08:11:47.428807 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-04-16 08:11:47.428848 | orchestrator |
2026-04-16 08:11:47.428862 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 08:11:47.428875 | orchestrator | Thursday 16 April 2026 08:11:27 +0000 (0:00:01.123) 0:25:34.393 ********
2026-04-16 08:11:47.428888 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:11:47.428901 | orchestrator |
2026-04-16
08:11:47.428941 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:11:47.428955 | orchestrator | Thursday 16 April 2026 08:11:29 +0000 (0:00:01.443) 0:25:35.836 ******** 2026-04-16 08:11:47.428973 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.428991 | orchestrator | 2026-04-16 08:11:47.429009 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:11:47.429028 | orchestrator | Thursday 16 April 2026 08:11:30 +0000 (0:00:01.102) 0:25:36.940 ******** 2026-04-16 08:11:47.429045 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429064 | orchestrator | 2026-04-16 08:11:47.429082 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:11:47.429102 | orchestrator | Thursday 16 April 2026 08:11:31 +0000 (0:00:01.427) 0:25:38.367 ******** 2026-04-16 08:11:47.429120 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429137 | orchestrator | 2026-04-16 08:11:47.429157 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:11:47.429198 | orchestrator | Thursday 16 April 2026 08:11:32 +0000 (0:00:01.125) 0:25:39.493 ******** 2026-04-16 08:11:47.429232 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429250 | orchestrator | 2026-04-16 08:11:47.429268 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:11:47.429288 | orchestrator | Thursday 16 April 2026 08:11:33 +0000 (0:00:01.106) 0:25:40.600 ******** 2026-04-16 08:11:47.429307 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429325 | orchestrator | 2026-04-16 08:11:47.429345 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:11:47.429365 | orchestrator | Thursday 16 April 2026 08:11:35 +0000 (0:00:01.164) 0:25:41.764 ******** 
2026-04-16 08:11:47.429384 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:11:47.429402 | orchestrator | 2026-04-16 08:11:47.429421 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:11:47.429441 | orchestrator | Thursday 16 April 2026 08:11:36 +0000 (0:00:01.117) 0:25:42.881 ******** 2026-04-16 08:11:47.429452 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429463 | orchestrator | 2026-04-16 08:11:47.429474 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:11:47.429485 | orchestrator | Thursday 16 April 2026 08:11:37 +0000 (0:00:01.190) 0:25:44.072 ******** 2026-04-16 08:11:47.429512 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:11:47.429523 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 08:11:47.429534 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:11:47.429545 | orchestrator | 2026-04-16 08:11:47.429556 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:11:47.429567 | orchestrator | Thursday 16 April 2026 08:11:38 +0000 (0:00:01.673) 0:25:45.745 ******** 2026-04-16 08:11:47.429577 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:11:47.429588 | orchestrator | 2026-04-16 08:11:47.429599 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:11:47.429609 | orchestrator | Thursday 16 April 2026 08:11:40 +0000 (0:00:01.208) 0:25:46.954 ******** 2026-04-16 08:11:47.429620 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:11:47.429631 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-04-16 08:11:47.429642 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-16 08:11:47.429652 | orchestrator | 2026-04-16 08:11:47.429663 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:11:47.429687 | orchestrator | Thursday 16 April 2026 08:11:42 +0000 (0:00:02.780) 0:25:49.734 ******** 2026-04-16 08:11:47.429698 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-04-16 08:11:47.429731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-04-16 08:11:47.429742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-04-16 08:11:47.429753 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:11:47.429765 | orchestrator | 2026-04-16 08:11:47.429775 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:11:47.429786 | orchestrator | Thursday 16 April 2026 08:11:44 +0000 (0:00:01.354) 0:25:51.089 ******** 2026-04-16 08:11:47.429800 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.429814 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.429826 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.429837 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:11:47.429848 | orchestrator | 2026-04-16 08:11:47.429859 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-04-16 08:11:47.429870 | orchestrator | Thursday 16 April 2026 08:11:46 +0000 (0:00:01.850) 0:25:52.940 ******** 2026-04-16 08:11:47.429883 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.429897 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.429909 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:11:47.430003 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:11:47.430088 | orchestrator | 2026-04-16 08:11:47.430101 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:11:47.430112 | orchestrator | Thursday 16 April 2026 08:11:47 +0000 (0:00:01.131) 0:25:54.071 ******** 2026-04-16 08:11:47.430133 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '73554beccbed', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:11:40.704057', 'end': '2026-04-16 08:11:40.759076', 'delta': '0:00:00.055019', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:11:47.430170 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:11:41.245242', 'end': '2026-04-16 08:11:41.299937', 'delta': '0:00:00.054695', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:12:05.420472 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:11:41.810329', 'end': '2026-04-16 08:11:41.856360', 'delta': '0:00:00.046031', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:12:05.420587 | orchestrator | 2026-04-16 08:12:05.420603 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:12:05.420616 | orchestrator | Thursday 16 April 2026 08:11:48 +0000 (0:00:01.203) 0:25:55.275 ******** 2026-04-16 08:12:05.420626 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:12:05.420637 | orchestrator | 2026-04-16 08:12:05.420647 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:12:05.420657 | orchestrator | Thursday 16 April 2026 08:11:49 +0000 (0:00:01.204) 0:25:56.480 ******** 2026-04-16 08:12:05.420667 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.420677 | orchestrator | 2026-04-16 08:12:05.420686 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:12:05.420696 | orchestrator | Thursday 16 April 2026 08:11:50 +0000 (0:00:01.195) 0:25:57.676 ******** 2026-04-16 08:12:05.420705 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:12:05.420715 | orchestrator | 2026-04-16 08:12:05.420725 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:12:05.420735 | orchestrator | Thursday 16 April 2026 08:11:52 +0000 (0:00:01.105) 0:25:58.781 ******** 2026-04-16 08:12:05.420744 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:12:05.420754 | orchestrator | 2026-04-16 08:12:05.420764 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:12:05.420773 | orchestrator | Thursday 16 April 2026 08:11:53 +0000 (0:00:01.946) 0:26:00.728 ******** 2026-04-16 
08:12:05.420783 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:12:05.420792 | orchestrator | 2026-04-16 08:12:05.420802 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:12:05.420812 | orchestrator | Thursday 16 April 2026 08:11:55 +0000 (0:00:01.161) 0:26:01.889 ******** 2026-04-16 08:12:05.420821 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.420831 | orchestrator | 2026-04-16 08:12:05.420841 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:12:05.420850 | orchestrator | Thursday 16 April 2026 08:11:56 +0000 (0:00:01.121) 0:26:03.011 ******** 2026-04-16 08:12:05.420860 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.420869 | orchestrator | 2026-04-16 08:12:05.420879 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:12:05.420908 | orchestrator | Thursday 16 April 2026 08:11:57 +0000 (0:00:01.234) 0:26:04.246 ******** 2026-04-16 08:12:05.420918 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.420958 | orchestrator | 2026-04-16 08:12:05.420968 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:12:05.420977 | orchestrator | Thursday 16 April 2026 08:11:58 +0000 (0:00:01.149) 0:26:05.395 ******** 2026-04-16 08:12:05.420987 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.420996 | orchestrator | 2026-04-16 08:12:05.421006 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:12:05.421018 | orchestrator | Thursday 16 April 2026 08:11:59 +0000 (0:00:01.083) 0:26:06.479 ******** 2026-04-16 08:12:05.421029 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.421039 | orchestrator | 2026-04-16 08:12:05.421050 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-04-16 08:12:05.421075 | orchestrator | Thursday 16 April 2026 08:12:00 +0000 (0:00:01.117) 0:26:07.596 ******** 2026-04-16 08:12:05.421086 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.421098 | orchestrator | 2026-04-16 08:12:05.421109 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:12:05.421121 | orchestrator | Thursday 16 April 2026 08:12:01 +0000 (0:00:01.116) 0:26:08.713 ******** 2026-04-16 08:12:05.421131 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.421142 | orchestrator | 2026-04-16 08:12:05.421154 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:12:05.421169 | orchestrator | Thursday 16 April 2026 08:12:03 +0000 (0:00:01.100) 0:26:09.813 ******** 2026-04-16 08:12:05.421186 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.421202 | orchestrator | 2026-04-16 08:12:05.421226 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:12:05.421247 | orchestrator | Thursday 16 April 2026 08:12:04 +0000 (0:00:01.134) 0:26:10.948 ******** 2026-04-16 08:12:05.421263 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:05.421279 | orchestrator | 2026-04-16 08:12:05.421295 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:12:05.421311 | orchestrator | Thursday 16 April 2026 08:12:05 +0000 (0:00:01.091) 0:26:12.040 ******** 2026-04-16 08:12:05.421348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 
Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:05.421369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:05.421388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:05.421405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:12:05.421437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-04-16 08:12:05.421456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:05.421467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:05.421500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 
'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:12:06.679338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:06.679466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:12:06.679481 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:12:06.679500 | orchestrator | 2026-04-16 08:12:06.679522 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:12:06.679565 | orchestrator | Thursday 16 April 2026 08:12:06 +0000 (0:00:01.225) 0:26:13.265 ******** 2026-04-16 08:12:06.679583 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:12:06.679615 | 
orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:12:06.679630 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:12:06.679645 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:06.679682 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:06.679708 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:06.679723 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:06.679746 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6b3387fe', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1', 'scsi-SQEMU_QEMU_HARDDISK_6b3387fe-ddff-45b8-a1d5-c29892c481d8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:06.679774 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:43.084357 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:12:43.084450 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084461 | orchestrator |
2026-04-16 08:12:43.084469 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:12:43.084477 |
orchestrator | Thursday 16 April 2026 08:12:07 +0000 (0:00:01.233) 0:26:14.499 ********
2026-04-16 08:12:43.084484 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.084491 | orchestrator |
2026-04-16 08:12:43.084498 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:12:43.084504 | orchestrator | Thursday 16 April 2026 08:12:09 +0000 (0:00:01.510) 0:26:16.010 ********
2026-04-16 08:12:43.084511 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.084517 | orchestrator |
2026-04-16 08:12:43.084523 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:12:43.084530 | orchestrator | Thursday 16 April 2026 08:12:10 +0000 (0:00:01.117) 0:26:17.127 ********
2026-04-16 08:12:43.084536 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.084543 | orchestrator |
2026-04-16 08:12:43.084549 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:12:43.084555 | orchestrator | Thursday 16 April 2026 08:12:11 +0000 (0:00:01.503) 0:26:18.630 ********
2026-04-16 08:12:43.084562 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084568 | orchestrator |
2026-04-16 08:12:43.084574 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:12:43.084580 | orchestrator | Thursday 16 April 2026 08:12:12 +0000 (0:00:01.105) 0:26:19.736 ********
2026-04-16 08:12:43.084586 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084593 | orchestrator |
2026-04-16 08:12:43.084599 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:12:43.084617 | orchestrator | Thursday 16 April 2026 08:12:14 +0000 (0:00:01.202) 0:26:20.939 ********
2026-04-16 08:12:43.084624 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084630 | orchestrator |
2026-04-16 08:12:43.084636 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:12:43.084642 | orchestrator | Thursday 16 April 2026 08:12:15 +0000 (0:00:01.122) 0:26:22.061 ********
2026-04-16 08:12:43.084649 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 08:12:43.084655 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 08:12:43.084662 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 08:12:43.084668 | orchestrator |
2026-04-16 08:12:43.084674 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:12:43.084680 | orchestrator | Thursday 16 April 2026 08:12:16 +0000 (0:00:01.680) 0:26:23.742 ********
2026-04-16 08:12:43.084687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-04-16 08:12:43.084694 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 08:12:43.084700 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-04-16 08:12:43.084725 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084731 | orchestrator |
2026-04-16 08:12:43.084737 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:12:43.084744 | orchestrator | Thursday 16 April 2026 08:12:18 +0000 (0:00:01.137) 0:26:24.879 ********
2026-04-16 08:12:43.084750 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.084756 | orchestrator |
2026-04-16 08:12:43.084762 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:12:43.084769 | orchestrator | Thursday 16 April 2026 08:12:19 +0000 (0:00:01.096) 0:26:25.975 ********
2026-04-16 08:12:43.084775 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:12:43.084782 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 08:12:43.084788 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:12:43.084794 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:12:43.084800 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:12:43.084806 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:12:43.084813 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:12:43.084819 | orchestrator |
2026-04-16 08:12:43.084825 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:12:43.084831 | orchestrator | Thursday 16 April 2026 08:12:21 +0000 (0:00:02.061) 0:26:28.037 ********
2026-04-16 08:12:43.084837 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:12:43.084843 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 08:12:43.084850 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:12:43.084856 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:12:43.084874 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:12:43.084880 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:12:43.084887 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:12:43.084893 | orchestrator |
2026-04-16 08:12:43.084899 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:12:43.084905 | orchestrator | Thursday 16 April 2026 08:12:23 +0000 (0:00:02.129) 0:26:30.166
********
2026-04-16 08:12:43.084911 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-04-16 08:12:43.084918 | orchestrator |
2026-04-16 08:12:43.084925 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:12:43.084931 | orchestrator | Thursday 16 April 2026 08:12:24 +0000 (0:00:01.154) 0:26:31.320 ********
2026-04-16 08:12:43.084937 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-04-16 08:12:43.084970 | orchestrator |
2026-04-16 08:12:43.084976 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:12:43.084982 | orchestrator | Thursday 16 April 2026 08:12:25 +0000 (0:00:01.176) 0:26:32.497 ********
2026-04-16 08:12:43.084989 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.084995 | orchestrator |
2026-04-16 08:12:43.085001 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:12:43.085007 | orchestrator | Thursday 16 April 2026 08:12:27 +0000 (0:00:01.504) 0:26:34.002 ********
2026-04-16 08:12:43.085013 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085019 | orchestrator |
2026-04-16 08:12:43.085025 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:12:43.085031 | orchestrator | Thursday 16 April 2026 08:12:28 +0000 (0:00:01.119) 0:26:35.121 ********
2026-04-16 08:12:43.085045 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085052 | orchestrator |
2026-04-16 08:12:43.085058 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:12:43.085064 | orchestrator | Thursday 16 April 2026 08:12:29 +0000 (0:00:01.104) 0:26:36.226 ********
2026-04-16 08:12:43.085070 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085076 | orchestrator |
2026-04-16 08:12:43.085082 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:12:43.085089 | orchestrator | Thursday 16 April 2026 08:12:30 +0000 (0:00:01.099) 0:26:37.325 ********
2026-04-16 08:12:43.085095 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.085101 | orchestrator |
2026-04-16 08:12:43.085111 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:12:43.085117 | orchestrator | Thursday 16 April 2026 08:12:32 +0000 (0:00:01.516) 0:26:38.842 ********
2026-04-16 08:12:43.085123 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085130 | orchestrator |
2026-04-16 08:12:43.085136 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:12:43.085142 | orchestrator | Thursday 16 April 2026 08:12:33 +0000 (0:00:01.215) 0:26:40.058 ********
2026-04-16 08:12:43.085148 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085154 | orchestrator |
2026-04-16 08:12:43.085161 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:12:43.085167 | orchestrator | Thursday 16 April 2026 08:12:34 +0000 (0:00:01.119) 0:26:41.177 ********
2026-04-16 08:12:43.085174 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.085184 | orchestrator |
2026-04-16 08:12:43.085194 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:12:43.085205 | orchestrator | Thursday 16 April 2026 08:12:35 +0000 (0:00:01.545) 0:26:42.723 ********
2026-04-16 08:12:43.085215 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.085225 | orchestrator |
2026-04-16 08:12:43.085235 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:12:43.085244 | orchestrator | Thursday 16 April 2026 08:12:37 +0000 (0:00:01.549) 0:26:44.273 ********
2026-04-16 08:12:43.085254 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085264 | orchestrator |
2026-04-16 08:12:43.085273 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:12:43.085283 | orchestrator | Thursday 16 April 2026 08:12:38 +0000 (0:00:00.782) 0:26:45.055 ********
2026-04-16 08:12:43.085293 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:12:43.085303 | orchestrator |
2026-04-16 08:12:43.085314 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:12:43.085324 | orchestrator | Thursday 16 April 2026 08:12:39 +0000 (0:00:00.813) 0:26:45.869 ********
2026-04-16 08:12:43.085334 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085345 | orchestrator |
2026-04-16 08:12:43.085354 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:12:43.085365 | orchestrator | Thursday 16 April 2026 08:12:39 +0000 (0:00:00.757) 0:26:46.627 ********
2026-04-16 08:12:43.085372 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085378 | orchestrator |
2026-04-16 08:12:43.085384 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:12:43.085390 | orchestrator | Thursday 16 April 2026 08:12:40 +0000 (0:00:00.800) 0:26:47.427 ********
2026-04-16 08:12:43.085397 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085403 | orchestrator |
2026-04-16 08:12:43.085409 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:12:43.085415 | orchestrator | Thursday 16 April 2026 08:12:41 +0000 (0:00:00.783) 0:26:48.211 ********
2026-04-16 08:12:43.085421 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085427 | orchestrator |
2026-04-16 08:12:43.085433 | orchestrator | TASK
[ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:12:43.085439 | orchestrator | Thursday 16 April 2026 08:12:42 +0000 (0:00:00.796) 0:26:49.007 ********
2026-04-16 08:12:43.085452 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:12:43.085458 | orchestrator |
2026-04-16 08:12:43.085464 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:12:43.085470 | orchestrator | Thursday 16 April 2026 08:12:43 +0000 (0:00:00.761) 0:26:49.769 ********
2026-04-16 08:12:43.085483 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.510595 | orchestrator |
2026-04-16 08:13:23.510716 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:13:23.510735 | orchestrator | Thursday 16 April 2026 08:12:43 +0000 (0:00:00.761) 0:26:50.531 ********
2026-04-16 08:13:23.510748 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.510761 | orchestrator |
2026-04-16 08:13:23.510772 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:13:23.510784 | orchestrator | Thursday 16 April 2026 08:12:44 +0000 (0:00:00.801) 0:26:51.332 ********
2026-04-16 08:13:23.510795 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.510806 | orchestrator |
2026-04-16 08:13:23.510817 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:13:23.510828 | orchestrator | Thursday 16 April 2026 08:12:45 +0000 (0:00:00.797) 0:26:52.130 ********
2026-04-16 08:13:23.510839 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.510850 | orchestrator |
2026-04-16 08:13:23.510861 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:13:23.510872 | orchestrator | Thursday 16 April 2026 08:12:46 +0000 (0:00:00.760) 0:26:52.890 ********
2026-04-16 08:13:23.510883 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.510894 | orchestrator |
2026-04-16 08:13:23.510905 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:13:23.510916 | orchestrator | Thursday 16 April 2026 08:12:46 +0000 (0:00:00.770) 0:26:53.661 ********
2026-04-16 08:13:23.510926 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.510937 | orchestrator |
2026-04-16 08:13:23.510948 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:13:23.511011 | orchestrator | Thursday 16 April 2026 08:12:47 +0000 (0:00:00.748) 0:26:54.410 ********
2026-04-16 08:13:23.511024 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511035 | orchestrator |
2026-04-16 08:13:23.511046 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:13:23.511057 | orchestrator | Thursday 16 April 2026 08:12:48 +0000 (0:00:00.754) 0:26:55.164 ********
2026-04-16 08:13:23.511068 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511079 | orchestrator |
2026-04-16 08:13:23.511090 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:13:23.511103 | orchestrator | Thursday 16 April 2026 08:12:49 +0000 (0:00:00.770) 0:26:55.911 ********
2026-04-16 08:13:23.511115 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511127 | orchestrator |
2026-04-16 08:13:23.511141 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:13:23.511161 | orchestrator | Thursday 16 April 2026 08:12:49 +0000 (0:00:00.770) 0:26:56.681 ********
2026-04-16 08:13:23.511198 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511217 | orchestrator |
2026-04-16 08:13:23.511235 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:13:23.511255 | orchestrator | Thursday 16 April 2026 08:12:50 +0000 (0:00:00.782) 0:26:57.464 ********
2026-04-16 08:13:23.511274 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511295 | orchestrator |
2026-04-16 08:13:23.511315 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:13:23.511332 | orchestrator | Thursday 16 April 2026 08:12:51 +0000 (0:00:00.746) 0:26:58.210 ********
2026-04-16 08:13:23.511346 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511358 | orchestrator |
2026-04-16 08:13:23.511369 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:13:23.511380 | orchestrator | Thursday 16 April 2026 08:12:52 +0000 (0:00:00.742) 0:26:58.953 ********
2026-04-16 08:13:23.511414 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511426 | orchestrator |
2026-04-16 08:13:23.511437 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:13:23.511448 | orchestrator | Thursday 16 April 2026 08:12:52 +0000 (0:00:00.765) 0:26:59.718 ********
2026-04-16 08:13:23.511459 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511469 | orchestrator |
2026-04-16 08:13:23.511480 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:13:23.511491 | orchestrator | Thursday 16 April 2026 08:12:53 +0000 (0:00:00.775) 0:27:00.494 ********
2026-04-16 08:13:23.511501 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511512 | orchestrator |
2026-04-16 08:13:23.511523 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:13:23.511534 | orchestrator | Thursday 16 April 2026 08:12:54 +0000 (0:00:00.757) 0:27:01.251 ********
2026-04-16 08:13:23.511545 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.511556 | orchestrator |
2026-04-16 08:13:23.511566 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:13:23.511577 | orchestrator | Thursday 16 April 2026 08:12:56 +0000 (0:00:01.547) 0:27:02.799 ********
2026-04-16 08:13:23.511588 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.511599 | orchestrator |
2026-04-16 08:13:23.511609 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:13:23.511620 | orchestrator | Thursday 16 April 2026 08:12:58 +0000 (0:00:02.165) 0:27:04.965 ********
2026-04-16 08:13:23.511631 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-04-16 08:13:23.511643 | orchestrator |
2026-04-16 08:13:23.511654 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 08:13:23.511665 | orchestrator | Thursday 16 April 2026 08:12:59 +0000 (0:00:01.150) 0:27:06.115 ********
2026-04-16 08:13:23.511676 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511686 | orchestrator |
2026-04-16 08:13:23.511697 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 08:13:23.511708 | orchestrator | Thursday 16 April 2026 08:13:00 +0000 (0:00:01.145) 0:27:07.261 ********
2026-04-16 08:13:23.511719 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511729 | orchestrator |
2026-04-16 08:13:23.511740 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 08:13:23.511751 | orchestrator | Thursday 16 April 2026 08:13:01 +0000 (0:00:01.125) 0:27:08.387 ********
2026-04-16 08:13:23.511781 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:13:23.511793 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:13:23.511804 | orchestrator |
2026-04-16 08:13:23.511814 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 08:13:23.511825 | orchestrator | Thursday 16 April 2026 08:13:03 +0000 (0:00:01.868) 0:27:10.256 ********
2026-04-16 08:13:23.511836 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.511847 | orchestrator |
2026-04-16 08:13:23.511858 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 08:13:23.511868 | orchestrator | Thursday 16 April 2026 08:13:04 +0000 (0:00:01.489) 0:27:11.745 ********
2026-04-16 08:13:23.511879 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511890 | orchestrator |
2026-04-16 08:13:23.511900 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 08:13:23.511911 | orchestrator | Thursday 16 April 2026 08:13:06 +0000 (0:00:01.174) 0:27:12.920 ********
2026-04-16 08:13:23.511922 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.511933 | orchestrator |
2026-04-16 08:13:23.511944 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 08:13:23.511955 | orchestrator | Thursday 16 April 2026 08:13:06 +0000 (0:00:00.772) 0:27:13.693 ********
2026-04-16 08:13:23.511999 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512032 | orchestrator |
2026-04-16 08:13:23.512050 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 08:13:23.512062 | orchestrator | Thursday 16 April 2026 08:13:07 +0000 (0:00:00.770) 0:27:14.463 ********
2026-04-16 08:13:23.512073 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-04-16 08:13:23.512084 | orchestrator |
2026-04-16 08:13:23.512094 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-16 08:13:23.512105 | orchestrator | Thursday 16 April 2026 08:13:08 +0000 (0:00:01.101) 0:27:15.565 ********
2026-04-16 08:13:23.512116 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.512127 | orchestrator |
2026-04-16 08:13:23.512137 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-16 08:13:23.512148 | orchestrator | Thursday 16 April 2026 08:13:10 +0000 (0:00:01.667) 0:27:17.233 ********
2026-04-16 08:13:23.512159 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-16 08:13:23.512170 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-16 08:13:23.512187 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-16 08:13:23.512206 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512230 | orchestrator |
2026-04-16 08:13:23.512257 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-16 08:13:23.512274 | orchestrator | Thursday 16 April 2026 08:13:11 +0000 (0:00:01.149) 0:27:18.382 ********
2026-04-16 08:13:23.512292 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512310 | orchestrator |
2026-04-16 08:13:23.512330 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-16 08:13:23.512349 | orchestrator | Thursday 16 April 2026 08:13:12 +0000 (0:00:01.111) 0:27:19.493 ********
2026-04-16 08:13:23.512368 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512388 | orchestrator |
2026-04-16 08:13:23.512407 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-16 08:13:23.512425 | orchestrator | Thursday 16 April 2026 08:13:13 +0000 (0:00:01.140) 0:27:20.634 ********
2026-04-16 08:13:23.512443
| orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512461 | orchestrator |
2026-04-16 08:13:23.512479 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-16 08:13:23.512498 | orchestrator | Thursday 16 April 2026 08:13:15 +0000 (0:00:01.147) 0:27:21.781 ********
2026-04-16 08:13:23.512515 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512533 | orchestrator |
2026-04-16 08:13:23.512551 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-16 08:13:23.512569 | orchestrator | Thursday 16 April 2026 08:13:16 +0000 (0:00:01.153) 0:27:22.934 ********
2026-04-16 08:13:23.512587 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512604 | orchestrator |
2026-04-16 08:13:23.512620 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 08:13:23.512638 | orchestrator | Thursday 16 April 2026 08:13:16 +0000 (0:00:00.782) 0:27:23.716 ********
2026-04-16 08:13:23.512657 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.512675 | orchestrator |
2026-04-16 08:13:23.512691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 08:13:23.512706 | orchestrator | Thursday 16 April 2026 08:13:19 +0000 (0:00:02.277) 0:27:25.994 ********
2026-04-16 08:13:23.512722 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:23.512737 | orchestrator |
2026-04-16 08:13:23.512753 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 08:13:23.512769 | orchestrator | Thursday 16 April 2026 08:13:20 +0000 (0:00:00.766) 0:27:26.760 ********
2026-04-16 08:13:23.512788 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-04-16 08:13:23.512807 | orchestrator |
2026-04-16 08:13:23.512826 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-16 08:13:23.512859 | orchestrator | Thursday 16 April 2026 08:13:21 +0000 (0:00:01.103) 0:27:27.864 ********
2026-04-16 08:13:23.512878 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.512896 | orchestrator |
2026-04-16 08:13:23.512915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-16 08:13:23.512934 | orchestrator | Thursday 16 April 2026 08:13:22 +0000 (0:00:01.102) 0:27:28.967 ********
2026-04-16 08:13:23.512953 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.513004 | orchestrator |
2026-04-16 08:13:23.513026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-16 08:13:23.513046 | orchestrator | Thursday 16 April 2026 08:13:23 +0000 (0:00:01.151) 0:27:30.119 ********
2026-04-16 08:13:23.513066 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:23.513083 | orchestrator |
2026-04-16 08:13:23.513119 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-16 08:13:56.812861 | orchestrator | Thursday 16 April 2026 08:13:24 +0000 (0:00:01.133) 0:27:31.252 ********
2026-04-16 08:13:56.813094 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813122 | orchestrator |
2026-04-16 08:13:56.813138 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-16 08:13:56.813152 | orchestrator | Thursday 16 April 2026 08:13:25 +0000 (0:00:01.131) 0:27:32.384 ********
2026-04-16 08:13:56.813166 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813181 | orchestrator |
2026-04-16 08:13:56.813196 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-16 08:13:56.813210 | orchestrator | Thursday 16 April 2026 08:13:26 +0000 (0:00:01.115) 0:27:33.499 ********
2026-04-16 08:13:56.813224 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813237 | orchestrator |
2026-04-16 08:13:56.813252 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-16 08:13:56.813265 | orchestrator | Thursday 16 April 2026 08:13:27 +0000 (0:00:01.130) 0:27:34.630 ********
2026-04-16 08:13:56.813280 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813294 | orchestrator |
2026-04-16 08:13:56.813310 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-16 08:13:56.813321 | orchestrator | Thursday 16 April 2026 08:13:29 +0000 (0:00:01.195) 0:27:35.825 ********
2026-04-16 08:13:56.813329 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813337 | orchestrator |
2026-04-16 08:13:56.813345 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-16 08:13:56.813353 | orchestrator | Thursday 16 April 2026 08:13:30 +0000 (0:00:01.128) 0:27:36.954 ********
2026-04-16 08:13:56.813362 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:13:56.813371 | orchestrator |
2026-04-16 08:13:56.813380 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 08:13:56.813390 | orchestrator | Thursday 16 April 2026 08:13:30 +0000 (0:00:00.794) 0:27:37.749 ********
2026-04-16 08:13:56.813399 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-04-16 08:13:56.813409 | orchestrator |
2026-04-16 08:13:56.813418 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-16 08:13:56.813431 | orchestrator | Thursday 16 April 2026 08:13:32 +0000 (0:00:01.122) 0:27:38.871 ********
2026-04-16 08:13:56.813444 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-04-16 08:13:56.813459 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-04-16
08:13:56.813493 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-04-16 08:13:56.813509 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-04-16 08:13:56.813522 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-04-16 08:13:56.813537 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-04-16 08:13:56.813550 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-04-16 08:13:56.813563 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:13:56.813578 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:13:56.813623 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:13:56.813633 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:13:56.813640 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:13:56.813648 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:13:56.813656 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:13:56.813664 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph) 2026-04-16 08:13:56.813672 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph) 2026-04-16 08:13:56.813680 | orchestrator | 2026-04-16 08:13:56.813688 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:13:56.813696 | orchestrator | Thursday 16 April 2026 08:13:38 +0000 (0:00:06.590) 0:27:45.462 ******** 2026-04-16 08:13:56.813704 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:13:56.813712 | orchestrator | 2026-04-16 08:13:56.813719 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:13:56.813727 | orchestrator | Thursday 16 April 2026 08:13:39 +0000 (0:00:00.762) 0:27:46.225 ******** 
2026-04-16 08:13:56.813735 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813744 | orchestrator |
2026-04-16 08:13:56.813751 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:13:56.813759 | orchestrator | Thursday 16 April 2026 08:13:40 +0000 (0:00:00.757) 0:27:46.982 ********
2026-04-16 08:13:56.813767 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813775 | orchestrator |
2026-04-16 08:13:56.813783 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:13:56.813791 | orchestrator | Thursday 16 April 2026 08:13:40 +0000 (0:00:00.764) 0:27:47.746 ********
2026-04-16 08:13:56.813799 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813807 | orchestrator |
2026-04-16 08:13:56.813815 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:13:56.813823 | orchestrator | Thursday 16 April 2026 08:13:41 +0000 (0:00:00.742) 0:27:48.489 ********
2026-04-16 08:13:56.813831 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813838 | orchestrator |
2026-04-16 08:13:56.813846 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:13:56.813854 | orchestrator | Thursday 16 April 2026 08:13:42 +0000 (0:00:00.683) 0:27:49.172 ********
2026-04-16 08:13:56.813862 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813870 | orchestrator |
2026-04-16 08:13:56.813878 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:13:56.813886 | orchestrator | Thursday 16 April 2026 08:13:43 +0000 (0:00:00.595) 0:27:49.768 ********
2026-04-16 08:13:56.813894 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813901 | orchestrator |
2026-04-16 08:13:56.813929 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:13:56.813938 | orchestrator | Thursday 16 April 2026 08:13:43 +0000 (0:00:00.733) 0:27:50.501 ********
2026-04-16 08:13:56.813947 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.813961 | orchestrator |
2026-04-16 08:13:56.813999 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:13:56.814075 | orchestrator | Thursday 16 April 2026 08:13:44 +0000 (0:00:00.757) 0:27:51.259 ********
2026-04-16 08:13:56.814093 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814102 | orchestrator |
2026-04-16 08:13:56.814110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:13:56.814117 | orchestrator | Thursday 16 April 2026 08:13:45 +0000 (0:00:00.754) 0:27:52.013 ********
2026-04-16 08:13:56.814129 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814142 | orchestrator |
2026-04-16 08:13:56.814155 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:13:56.814182 | orchestrator | Thursday 16 April 2026 08:13:45 +0000 (0:00:00.735) 0:27:52.748 ********
2026-04-16 08:13:56.814196 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814210 | orchestrator |
2026-04-16 08:13:56.814221 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:13:56.814234 | orchestrator | Thursday 16 April 2026 08:13:46 +0000 (0:00:00.770) 0:27:53.519 ********
2026-04-16 08:13:56.814248 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814261 | orchestrator |
2026-04-16 08:13:56.814274 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:13:56.814287 | orchestrator | Thursday 16 April 2026 08:13:47 +0000 (0:00:00.734) 0:27:54.253 ********
2026-04-16 08:13:56.814299 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814312 | orchestrator |
2026-04-16 08:13:56.814324 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:13:56.814337 | orchestrator | Thursday 16 April 2026 08:13:48 +0000 (0:00:00.835) 0:27:55.089 ********
2026-04-16 08:13:56.814349 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814361 | orchestrator |
2026-04-16 08:13:56.814374 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:13:56.814387 | orchestrator | Thursday 16 April 2026 08:13:49 +0000 (0:00:00.753) 0:27:55.843 ********
2026-04-16 08:13:56.814399 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814412 | orchestrator |
2026-04-16 08:13:56.814426 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:13:56.814450 | orchestrator | Thursday 16 April 2026 08:13:49 +0000 (0:00:00.807) 0:27:56.651 ********
2026-04-16 08:13:56.814464 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814478 | orchestrator |
2026-04-16 08:13:56.814489 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:13:56.814502 | orchestrator | Thursday 16 April 2026 08:13:50 +0000 (0:00:00.742) 0:27:57.393 ********
2026-04-16 08:13:56.814516 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814528 | orchestrator |
2026-04-16 08:13:56.814542 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:13:56.814587 | orchestrator | Thursday 16 April 2026 08:13:51 +0000 (0:00:00.737) 0:27:58.131 ********
2026-04-16 08:13:56.814602 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814615 | orchestrator |
2026-04-16 08:13:56.814629 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:13:56.814638 | orchestrator | Thursday 16 April 2026 08:13:52 +0000 (0:00:00.734) 0:27:58.865 ********
2026-04-16 08:13:56.814646 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814653 | orchestrator |
2026-04-16 08:13:56.814661 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:13:56.814669 | orchestrator | Thursday 16 April 2026 08:13:52 +0000 (0:00:00.731) 0:27:59.597 ********
2026-04-16 08:13:56.814677 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814685 | orchestrator |
2026-04-16 08:13:56.814693 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:13:56.814700 | orchestrator | Thursday 16 April 2026 08:13:53 +0000 (0:00:00.740) 0:28:00.337 ********
2026-04-16 08:13:56.814708 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814716 | orchestrator |
2026-04-16 08:13:56.814724 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:13:56.814731 | orchestrator | Thursday 16 April 2026 08:13:54 +0000 (0:00:00.749) 0:28:01.087 ********
2026-04-16 08:13:56.814739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 08:13:56.814747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 08:13:56.814755 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 08:13:56.814762 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814770 | orchestrator |
2026-04-16 08:13:56.814778 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:13:56.814796 | orchestrator | Thursday 16 April 2026 08:13:55 +0000 (0:00:01.008) 0:28:02.095 ********
2026-04-16 08:13:56.814804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 08:13:56.814812 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 08:13:56.814820 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 08:13:56.814828 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814835 | orchestrator |
2026-04-16 08:13:56.814843 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:13:56.814851 | orchestrator | Thursday 16 April 2026 08:13:56 +0000 (0:00:01.041) 0:28:03.137 ********
2026-04-16 08:13:56.814859 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-04-16 08:13:56.814867 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-04-16 08:13:56.814875 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-04-16 08:13:56.814882 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:13:56.814890 | orchestrator |
2026-04-16 08:13:56.814909 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:14:55.113583 | orchestrator | Thursday 16 April 2026 08:13:57 +0000 (0:00:01.029) 0:28:04.166 ********
2026-04-16 08:14:55.113673 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.113688 | orchestrator |
2026-04-16 08:14:55.113700 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:14:55.113710 | orchestrator | Thursday 16 April 2026 08:13:58 +0000 (0:00:00.784) 0:28:04.951 ********
2026-04-16 08:14:55.113720 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-04-16 08:14:55.113730 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.113740 | orchestrator |
2026-04-16 08:14:55.113749 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:14:55.113759 | orchestrator | Thursday 16 April 2026 08:13:59 +0000 (0:00:00.862) 0:28:05.814 ********
2026-04-16 08:14:55.113769 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.113779 | orchestrator |
2026-04-16 08:14:55.113789 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-16 08:14:55.113798 | orchestrator | Thursday 16 April 2026 08:14:00 +0000 (0:00:01.336) 0:28:07.150 ********
2026-04-16 08:14:55.113808 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:14:55.113818 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-04-16 08:14:55.113828 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:14:55.113837 | orchestrator |
2026-04-16 08:14:55.113847 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-16 08:14:55.113857 | orchestrator | Thursday 16 April 2026 08:14:01 +0000 (0:00:01.544) 0:28:08.695 ********
2026-04-16 08:14:55.113866 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1
2026-04-16 08:14:55.113876 | orchestrator |
2026-04-16 08:14:55.113886 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-16 08:14:55.113896 | orchestrator | Thursday 16 April 2026 08:14:03 +0000 (0:00:01.090) 0:28:09.785 ********
2026-04-16 08:14:55.113905 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.113915 | orchestrator |
2026-04-16 08:14:55.113925 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-16 08:14:55.113934 | orchestrator | Thursday 16 April 2026 08:14:04 +0000 (0:00:01.497) 0:28:11.283 ********
2026-04-16 08:14:55.113944 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.113954 | orchestrator |
2026-04-16 08:14:55.113963 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-16 08:14:55.114070 | orchestrator | Thursday 16 April 2026 08:14:05 +0000 (0:00:01.123) 0:28:12.406 ********
2026-04-16 08:14:55.114085 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:14:55.114125 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:14:55.114135 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:14:55.114146 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}]
2026-04-16 08:14:55.114157 | orchestrator |
2026-04-16 08:14:55.114168 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-16 08:14:55.114179 | orchestrator | Thursday 16 April 2026 08:14:12 +0000 (0:00:07.112) 0:28:19.519 ********
2026-04-16 08:14:55.114190 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.114201 | orchestrator |
2026-04-16 08:14:55.114212 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-16 08:14:55.114223 | orchestrator | Thursday 16 April 2026 08:14:13 +0000 (0:00:01.159) 0:28:20.678 ********
2026-04-16 08:14:55.114234 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 08:14:55.114244 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-04-16 08:14:55.114255 | orchestrator |
2026-04-16 08:14:55.114266 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:14:55.114277 | orchestrator | Thursday 16 April 2026 08:14:17 +0000 (0:00:03.232) 0:28:23.910 ********
2026-04-16 08:14:55.114288 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 08:14:55.114299 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-04-16 08:14:55.114310 | orchestrator |
2026-04-16 08:14:55.114320 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-16 08:14:55.114331 | orchestrator | Thursday 16 April 2026 08:14:19 +0000 (0:00:02.014) 0:28:25.926 ********
2026-04-16 08:14:55.114342 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.114353 | orchestrator |
2026-04-16 08:14:55.114364 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-16 08:14:55.114375 | orchestrator | Thursday 16 April 2026 08:14:20 +0000 (0:00:01.479) 0:28:27.405 ********
2026-04-16 08:14:55.114386 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.114396 | orchestrator |
2026-04-16 08:14:55.114407 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-16 08:14:55.114418 | orchestrator | Thursday 16 April 2026 08:14:21 +0000 (0:00:00.757) 0:28:28.163 ********
2026-04-16 08:14:55.114429 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.114440 | orchestrator |
2026-04-16 08:14:55.114451 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-16 08:14:55.114462 | orchestrator | Thursday 16 April 2026 08:14:22 +0000 (0:00:00.741) 0:28:28.905 ********
2026-04-16 08:14:55.114472 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1
2026-04-16 08:14:55.114483 | orchestrator |
2026-04-16 08:14:55.114494 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-16 08:14:55.114505 | orchestrator | Thursday 16 April 2026 08:14:23 +0000 (0:00:01.099) 0:28:30.004 ********
2026-04-16 08:14:55.114515 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.114524 | orchestrator |
2026-04-16 08:14:55.114534 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-16 08:14:55.114543 | orchestrator | Thursday 16 April 2026 08:14:24 +0000 (0:00:01.110) 0:28:31.115 ********
2026-04-16 08:14:55.114553 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.114563 | orchestrator |
2026-04-16 08:14:55.114587 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-16 08:14:55.114598 | orchestrator | Thursday 16 April 2026 08:14:25 +0000 (0:00:01.121) 0:28:32.236 ********
2026-04-16 08:14:55.114608 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1
2026-04-16 08:14:55.114617 | orchestrator |
2026-04-16 08:14:55.114627 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-16 08:14:55.114636 | orchestrator | Thursday 16 April 2026 08:14:26 +0000 (0:00:01.153) 0:28:33.389 ********
2026-04-16 08:14:55.114646 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.114662 | orchestrator |
2026-04-16 08:14:55.114672 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-16 08:14:55.114682 | orchestrator | Thursday 16 April 2026 08:14:28 +0000 (0:00:02.031) 0:28:35.421 ********
2026-04-16 08:14:55.114691 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.114701 | orchestrator |
2026-04-16 08:14:55.114711 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-16 08:14:55.114720 | orchestrator | Thursday 16 April 2026 08:14:30 +0000 (0:00:01.938) 0:28:37.360 ********
2026-04-16 08:14:55.114730 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:14:55.114740 | orchestrator |
2026-04-16 08:14:55.114749 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-16 08:14:55.114759 | orchestrator | Thursday 16 April 2026 08:14:33 +0000 (0:00:02.451) 0:28:39.811 ********
2026-04-16 08:14:55.114769 | orchestrator | changed: [testbed-node-1]
2026-04-16 08:14:55.114778 | orchestrator |
2026-04-16 08:14:55.114788 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-16 08:14:55.114798 | orchestrator | Thursday 16 April 2026 08:14:36 +0000 (0:00:03.529) 0:28:43.341 ********
2026-04-16 08:14:55.114807 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:14:55.114817 | orchestrator |
2026-04-16 08:14:55.114826 | orchestrator | PLAY [Upgrade ceph mgr nodes] **************************************************
2026-04-16 08:14:55.114836 | orchestrator |
2026-04-16 08:14:55.114846 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-04-16 08:14:55.114855 | orchestrator | Thursday 16 April 2026 08:14:37 +0000 (0:00:00.983) 0:28:44.325 ********
2026-04-16 08:14:55.114865 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:14:55.114874 | orchestrator |
2026-04-16 08:14:55.114884 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-04-16 08:14:55.114893 | orchestrator | Thursday 16 April 2026 08:14:40 +0000 (0:00:02.637) 0:28:46.962 ********
2026-04-16 08:14:55.114903 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:14:55.114913 | orchestrator |
2026-04-16 08:14:55.114928 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 08:14:55.114938 | orchestrator | Thursday 16 April 2026 08:14:42 +0000 (0:00:02.087) 0:28:49.050 ********
2026-04-16 08:14:55.114947 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2
2026-04-16 08:14:55.114957 | orchestrator |
2026-04-16 08:14:55.114966 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 08:14:55.114976 | orchestrator | Thursday 16 April 2026 08:14:43 +0000 (0:00:01.142) 0:28:50.193 ********
2026-04-16 08:14:55.115007 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115017 | orchestrator |
2026-04-16 08:14:55.115026 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 08:14:55.115036 | orchestrator | Thursday 16 April 2026 08:14:44 +0000 (0:00:01.480) 0:28:51.673 ********
2026-04-16 08:14:55.115045 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115055 | orchestrator |
2026-04-16 08:14:55.115064 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:14:55.115074 | orchestrator | Thursday 16 April 2026 08:14:46 +0000 (0:00:01.142) 0:28:52.815 ********
2026-04-16 08:14:55.115083 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115093 | orchestrator |
2026-04-16 08:14:55.115102 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:14:55.115112 | orchestrator | Thursday 16 April 2026 08:14:47 +0000 (0:00:01.394) 0:28:54.210 ********
2026-04-16 08:14:55.115121 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115131 | orchestrator |
2026-04-16 08:14:55.115140 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 08:14:55.115150 | orchestrator | Thursday 16 April 2026 08:14:48 +0000 (0:00:01.104) 0:28:55.314 ********
2026-04-16 08:14:55.115159 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115168 | orchestrator |
2026-04-16 08:14:55.115178 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 08:14:55.115187 | orchestrator | Thursday 16 April 2026 08:14:49 +0000 (0:00:01.099) 0:28:56.414 ********
2026-04-16 08:14:55.115203 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115213 | orchestrator |
2026-04-16 08:14:55.115222 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 08:14:55.115232 | orchestrator | Thursday 16 April 2026 08:14:50 +0000 (0:00:01.112) 0:28:57.526 ********
2026-04-16 08:14:55.115241 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:14:55.115251 | orchestrator |
2026-04-16 08:14:55.115260 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 08:14:55.115270 | orchestrator | Thursday 16 April 2026 08:14:51 +0000 (0:00:01.113) 0:28:58.640 ********
2026-04-16 08:14:55.115279 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:14:55.115289 | orchestrator |
2026-04-16 08:14:55.115298 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 08:14:55.115308 | orchestrator | Thursday 16 April 2026 08:14:52 +0000 (0:00:01.114) 0:28:59.754 ********
2026-04-16 08:14:55.115317 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:14:55.115327 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:14:55.115336 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 08:14:55.115346 | orchestrator |
2026-04-16 08:14:55.115355 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 08:14:55.115365 | orchestrator | Thursday 16 April 2026 08:14:54 +0000 (0:00:01.938) 0:29:01.693 ********
2026-04-16 08:14:55.115380 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:15:19.962184 | orchestrator |
2026-04-16 08:15:19.962304 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 08:15:19.962323 | orchestrator | Thursday 16 April 2026 08:14:56 +0000 (0:00:01.252) 0:29:02.945 ********
2026-04-16 08:15:19.962336 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:15:19.962348 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:15:19.962360 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 08:15:19.962371 | orchestrator |
2026-04-16 08:15:19.962383 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 08:15:19.962394 | orchestrator | Thursday 16 April 2026 08:14:59 +0000 (0:00:03.171) 0:29:06.117 ********
2026-04-16 08:15:19.962406 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-04-16 08:15:19.962417 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-04-16 08:15:19.962429 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 08:15:19.962440 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.962451 | orchestrator |
2026-04-16 08:15:19.962462 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 08:15:19.962473 | orchestrator | Thursday 16 April 2026 08:15:01 +0000 (0:00:01.752) 0:29:07.869 ********
2026-04-16 08:15:19.962486 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962512 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962538 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.962550 | orchestrator |
2026-04-16 08:15:19.962561 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 08:15:19.962596 | orchestrator | Thursday 16 April 2026 08:15:02 +0000 (0:00:01.876) 0:29:09.746 ********
2026-04-16 08:15:19.962610 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962625 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962638 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962652 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.962665 | orchestrator |
2026-04-16 08:15:19.962678 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 08:15:19.962690 | orchestrator | Thursday 16 April 2026 08:15:04 +0000 (0:00:01.122) 0:29:10.869 ********
2026-04-16 08:15:19.962722 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:14:56.725233', 'end': '2026-04-16 08:14:56.770268', 'delta': '0:00:00.045035', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962739 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:14:57.566613', 'end': '2026-04-16 08:14:57.628611', 'delta': '0:00:00.061998', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962753 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:14:58.149829', 'end': '2026-04-16 08:14:58.201861', 'delta': '0:00:00.052032', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:15:19.962774 | orchestrator |
2026-04-16 08:15:19.962787 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-16 08:15:19.962805 | orchestrator | Thursday 16 April 2026 08:15:05 +0000 (0:00:01.168) 0:29:12.037 ********
2026-04-16 08:15:19.962818 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:15:19.962831 | orchestrator |
2026-04-16 08:15:19.962844 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 08:15:19.962857 | orchestrator | Thursday 16 April 2026 08:15:06 +0000 (0:00:01.255) 0:29:13.237 ********
2026-04-16 08:15:19.962868 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.962878 | orchestrator |
2026-04-16 08:15:19.962889 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 08:15:19.962900 | orchestrator | Thursday 16 April 2026 08:15:07 +0000 (0:00:01.186) 0:29:14.493 ********
2026-04-16 08:15:19.962910 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:15:19.962921 | orchestrator |
2026-04-16 08:15:19.962932 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 08:15:19.962943 | orchestrator | Thursday 16 April 2026 08:15:08 +0000 (0:00:01.186) 0:29:15.680 ********
2026-04-16 08:15:19.962953 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:15:19.962965 | orchestrator |
2026-04-16 08:15:19.962975 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:15:19.963040 | orchestrator | Thursday 16 April 2026 08:15:10 +0000 (0:00:02.042) 0:29:17.722 ********
2026-04-16 08:15:19.963062 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:15:19.963082 | orchestrator |
2026-04-16 08:15:19.963100 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 08:15:19.963114 | orchestrator | Thursday 16 April 2026 08:15:12 +0000 (0:00:01.149) 0:29:18.872 ********
2026-04-16 08:15:19.963125 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963136 | orchestrator |
2026-04-16 08:15:19.963147 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 08:15:19.963157 | orchestrator | Thursday 16 April 2026 08:15:13 +0000 (0:00:01.099) 0:29:19.972 ********
2026-04-16 08:15:19.963168 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963179 | orchestrator |
2026-04-16 08:15:19.963189 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:15:19.963200 | orchestrator | Thursday 16 April 2026 08:15:14 +0000 (0:00:01.189) 0:29:21.162 ********
2026-04-16 08:15:19.963211 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963222 | orchestrator |
2026-04-16 08:15:19.963233 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 08:15:19.963243 | orchestrator | Thursday 16 April 2026 08:15:15 +0000 (0:00:01.116) 0:29:22.278 ********
2026-04-16 08:15:19.963254 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963265 | orchestrator |
2026-04-16 08:15:19.963276 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 08:15:19.963287 | orchestrator | Thursday 16 April 2026 08:15:16 +0000 (0:00:01.107) 0:29:23.386 ********
2026-04-16 08:15:19.963298 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963309 | orchestrator |
2026-04-16 08:15:19.963346 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 08:15:19.963357 | orchestrator | Thursday 16 April 2026 08:15:17 +0000 (0:00:01.108) 0:29:24.495 ********
2026-04-16 08:15:19.963368 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963379 | orchestrator |
2026-04-16 08:15:19.963390 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 08:15:19.963401 | orchestrator | Thursday 16 April 2026 08:15:18 +0000 (0:00:01.114) 0:29:25.609 ********
2026-04-16 08:15:19.963412 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:19.963422 | orchestrator |
2026-04-16 08:15:19.963434 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 08:15:19.963454 | orchestrator | Thursday 16 April 2026 08:15:19 +0000 (0:00:01.098) 0:29:26.708 ********
2026-04-16 08:15:23.481718 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:23.481857 | orchestrator |
2026-04-16 08:15:23.481886 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-16 08:15:23.481907 | orchestrator | Thursday 16 April 2026 08:15:21 +0000 (0:00:01.136) 0:29:27.844 ********
2026-04-16 08:15:23.481927 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:15:23.481947 | orchestrator |
2026-04-16 08:15:23.481966 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-16 08:15:23.482010 | orchestrator | Thursday 16 April 2026 08:15:22 +0000 (0:00:01.097) 0:29:28.941 ********
2026-04-16 08:15:23.482113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:15:23.482140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:15:23.482184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:15:23.482208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-16 08:15:23.482233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:15:23.482256 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:15:23.482278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:15:23.482335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:15:23.482400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:15:23.482416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:15:23.482429 | orchestrator | 
skipping: [testbed-node-2] 2026-04-16 08:15:23.482441 | orchestrator | 2026-04-16 08:15:23.482455 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:15:23.482468 | orchestrator | Thursday 16 April 2026 08:15:23 +0000 (0:00:01.212) 0:29:30.154 ******** 2026-04-16 08:15:23.482482 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:23.482496 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:23.482526 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324386 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-39-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324505 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324520 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324530 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324572 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4a571ce0', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 
'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_4a571ce0-7910-4acd-a84f-c7c407a3a7e5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324609 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324619 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:15:34.324630 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:15:34.324641 | orchestrator | 2026-04-16 08:15:34.324650 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:15:34.324660 | orchestrator | Thursday 16 April 2026 08:15:24 +0000 (0:00:01.225) 0:29:31.379 ******** 2026-04-16 08:15:34.324669 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:15:34.324679 | orchestrator | 2026-04-16 08:15:34.324688 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:15:34.324697 | orchestrator 
| Thursday 16 April 2026 08:15:26 +0000 (0:00:01.541) 0:29:32.921 ******** 2026-04-16 08:15:34.324705 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:15:34.324720 | orchestrator | 2026-04-16 08:15:34.324729 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:15:34.324738 | orchestrator | Thursday 16 April 2026 08:15:27 +0000 (0:00:01.138) 0:29:34.060 ******** 2026-04-16 08:15:34.324747 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:15:34.324755 | orchestrator | 2026-04-16 08:15:34.324764 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:15:34.324773 | orchestrator | Thursday 16 April 2026 08:15:28 +0000 (0:00:01.468) 0:29:35.529 ******** 2026-04-16 08:15:34.324781 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:15:34.324790 | orchestrator | 2026-04-16 08:15:34.324799 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:15:34.324808 | orchestrator | Thursday 16 April 2026 08:15:29 +0000 (0:00:01.124) 0:29:36.653 ******** 2026-04-16 08:15:34.324816 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:15:34.324825 | orchestrator | 2026-04-16 08:15:34.324834 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:15:34.324843 | orchestrator | Thursday 16 April 2026 08:15:31 +0000 (0:00:01.208) 0:29:37.861 ******** 2026-04-16 08:15:34.324851 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:15:34.324863 | orchestrator | 2026-04-16 08:15:34.324879 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:15:34.324894 | orchestrator | Thursday 16 April 2026 08:15:32 +0000 (0:00:01.116) 0:29:38.978 ******** 2026-04-16 08:15:34.324909 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-04-16 08:15:34.324925 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-1) 2026-04-16 08:15:34.324939 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:15:34.324954 | orchestrator | 2026-04-16 08:15:34.324969 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:15:34.324985 | orchestrator | Thursday 16 April 2026 08:15:34 +0000 (0:00:01.919) 0:29:40.898 ******** 2026-04-16 08:15:34.325059 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-04-16 08:15:34.325074 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-04-16 08:15:34.325088 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-04-16 08:15:34.325103 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:15:34.325112 | orchestrator | 2026-04-16 08:15:34.325130 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:16:10.133964 | orchestrator | Thursday 16 April 2026 08:15:35 +0000 (0:00:01.140) 0:29:42.038 ******** 2026-04-16 08:16:10.134252 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.134285 | orchestrator | 2026-04-16 08:16:10.134308 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:16:10.134322 | orchestrator | Thursday 16 April 2026 08:15:36 +0000 (0:00:01.129) 0:29:43.168 ******** 2026-04-16 08:16:10.134333 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:16:10.134345 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:16:10.134356 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:16:10.134368 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:16:10.134379 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-16 08:16:10.134390 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:16:10.134403 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:16:10.134422 | orchestrator | 2026-04-16 08:16:10.134441 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:16:10.134459 | orchestrator | Thursday 16 April 2026 08:15:38 +0000 (0:00:02.099) 0:29:45.268 ******** 2026-04-16 08:16:10.134479 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:16:10.134528 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:16:10.134564 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-04-16 08:16:10.134586 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:16:10.134604 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:16:10.134622 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:16:10.134640 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:16:10.134659 | orchestrator | 2026-04-16 08:16:10.134671 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:16:10.134682 | orchestrator | Thursday 16 April 2026 08:15:40 +0000 (0:00:02.141) 0:29:47.409 ******** 2026-04-16 08:16:10.134693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-04-16 08:16:10.134711 | orchestrator | 2026-04-16 08:16:10.134729 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:16:10.134747 
| orchestrator | Thursday 16 April 2026 08:15:41 +0000 (0:00:01.107) 0:29:48.517 ******** 2026-04-16 08:16:10.134765 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-04-16 08:16:10.134782 | orchestrator | 2026-04-16 08:16:10.134797 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:16:10.134814 | orchestrator | Thursday 16 April 2026 08:15:42 +0000 (0:00:01.086) 0:29:49.603 ******** 2026-04-16 08:16:10.134832 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:16:10.134850 | orchestrator | 2026-04-16 08:16:10.134868 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:16:10.134887 | orchestrator | Thursday 16 April 2026 08:15:44 +0000 (0:00:01.547) 0:29:51.151 ******** 2026-04-16 08:16:10.134906 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.134923 | orchestrator | 2026-04-16 08:16:10.134941 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:16:10.134958 | orchestrator | Thursday 16 April 2026 08:15:45 +0000 (0:00:01.117) 0:29:52.268 ******** 2026-04-16 08:16:10.134977 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135021 | orchestrator | 2026-04-16 08:16:10.135042 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 08:16:10.135061 | orchestrator | Thursday 16 April 2026 08:15:46 +0000 (0:00:01.090) 0:29:53.359 ******** 2026-04-16 08:16:10.135072 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135083 | orchestrator | 2026-04-16 08:16:10.135094 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 08:16:10.135105 | orchestrator | Thursday 16 April 2026 08:15:47 +0000 (0:00:01.110) 0:29:54.470 ******** 2026-04-16 08:16:10.135115 | orchestrator | ok: [testbed-node-2] 
2026-04-16 08:16:10.135126 | orchestrator | 2026-04-16 08:16:10.135137 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 08:16:10.135150 | orchestrator | Thursday 16 April 2026 08:15:49 +0000 (0:00:01.515) 0:29:55.985 ******** 2026-04-16 08:16:10.135169 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135187 | orchestrator | 2026-04-16 08:16:10.135205 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 08:16:10.135222 | orchestrator | Thursday 16 April 2026 08:15:50 +0000 (0:00:01.143) 0:29:57.128 ******** 2026-04-16 08:16:10.135240 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135260 | orchestrator | 2026-04-16 08:16:10.135278 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 08:16:10.135296 | orchestrator | Thursday 16 April 2026 08:15:51 +0000 (0:00:01.091) 0:29:58.220 ******** 2026-04-16 08:16:10.135311 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:16:10.135323 | orchestrator | 2026-04-16 08:16:10.135334 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:16:10.135358 | orchestrator | Thursday 16 April 2026 08:15:52 +0000 (0:00:01.512) 0:29:59.733 ******** 2026-04-16 08:16:10.135369 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:16:10.135380 | orchestrator | 2026-04-16 08:16:10.135391 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:16:10.135425 | orchestrator | Thursday 16 April 2026 08:15:54 +0000 (0:00:01.567) 0:30:01.301 ******** 2026-04-16 08:16:10.135436 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135447 | orchestrator | 2026-04-16 08:16:10.135458 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:16:10.135469 | orchestrator | Thursday 16 
April 2026 08:15:55 +0000 (0:00:00.800) 0:30:02.101 ******** 2026-04-16 08:16:10.135480 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:16:10.135491 | orchestrator | 2026-04-16 08:16:10.135501 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:16:10.135512 | orchestrator | Thursday 16 April 2026 08:15:56 +0000 (0:00:00.775) 0:30:02.876 ******** 2026-04-16 08:16:10.135523 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135534 | orchestrator | 2026-04-16 08:16:10.135550 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:16:10.135568 | orchestrator | Thursday 16 April 2026 08:15:56 +0000 (0:00:00.779) 0:30:03.656 ******** 2026-04-16 08:16:10.135587 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135605 | orchestrator | 2026-04-16 08:16:10.135622 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:16:10.135640 | orchestrator | Thursday 16 April 2026 08:15:57 +0000 (0:00:00.761) 0:30:04.418 ******** 2026-04-16 08:16:10.135657 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135674 | orchestrator | 2026-04-16 08:16:10.135694 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:16:10.135713 | orchestrator | Thursday 16 April 2026 08:15:58 +0000 (0:00:00.795) 0:30:05.213 ******** 2026-04-16 08:16:10.135750 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135775 | orchestrator | 2026-04-16 08:16:10.135786 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:16:10.135797 | orchestrator | Thursday 16 April 2026 08:15:59 +0000 (0:00:00.780) 0:30:05.994 ******** 2026-04-16 08:16:10.135816 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:16:10.135827 | orchestrator | 2026-04-16 08:16:10.135838 | orchestrator | 
TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:16:10.135849 | orchestrator | Thursday 16 April 2026 08:15:59 +0000 (0:00:00.739) 0:30:06.734 ********
2026-04-16 08:16:10.135859 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:10.135870 | orchestrator |
2026-04-16 08:16:10.135881 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:16:10.135892 | orchestrator | Thursday 16 April 2026 08:16:00 +0000 (0:00:00.776) 0:30:07.510 ********
2026-04-16 08:16:10.135902 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:10.135913 | orchestrator |
2026-04-16 08:16:10.135924 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:16:10.135935 | orchestrator | Thursday 16 April 2026 08:16:01 +0000 (0:00:00.760) 0:30:08.271 ********
2026-04-16 08:16:10.135945 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:10.135956 | orchestrator |
2026-04-16 08:16:10.135967 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:16:10.135978 | orchestrator | Thursday 16 April 2026 08:16:02 +0000 (0:00:00.841) 0:30:09.112 ********
2026-04-16 08:16:10.135989 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136041 | orchestrator |
2026-04-16 08:16:10.136052 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:16:10.136063 | orchestrator | Thursday 16 April 2026 08:16:03 +0000 (0:00:00.761) 0:30:09.874 ********
2026-04-16 08:16:10.136074 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136085 | orchestrator |
2026-04-16 08:16:10.136095 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:16:10.136116 | orchestrator | Thursday 16 April 2026 08:16:03 +0000 (0:00:00.760) 0:30:10.634 ********
2026-04-16 08:16:10.136127 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136137 | orchestrator |
2026-04-16 08:16:10.136148 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:16:10.136159 | orchestrator | Thursday 16 April 2026 08:16:04 +0000 (0:00:00.758) 0:30:11.393 ********
2026-04-16 08:16:10.136169 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136180 | orchestrator |
2026-04-16 08:16:10.136191 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:16:10.136202 | orchestrator | Thursday 16 April 2026 08:16:05 +0000 (0:00:00.801) 0:30:12.195 ********
2026-04-16 08:16:10.136212 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136223 | orchestrator |
2026-04-16 08:16:10.136234 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:16:10.136245 | orchestrator | Thursday 16 April 2026 08:16:06 +0000 (0:00:00.746) 0:30:12.942 ********
2026-04-16 08:16:10.136255 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136266 | orchestrator |
2026-04-16 08:16:10.136277 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:16:10.136288 | orchestrator | Thursday 16 April 2026 08:16:06 +0000 (0:00:00.752) 0:30:13.694 ********
2026-04-16 08:16:10.136299 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136309 | orchestrator |
2026-04-16 08:16:10.136320 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:16:10.136331 | orchestrator | Thursday 16 April 2026 08:16:07 +0000 (0:00:00.773) 0:30:14.467 ********
2026-04-16 08:16:10.136341 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136352 | orchestrator |
2026-04-16 08:16:10.136363 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:16:10.136374 | orchestrator | Thursday 16 April 2026 08:16:08 +0000 (0:00:00.787) 0:30:15.255 ********
2026-04-16 08:16:10.136384 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136395 | orchestrator |
2026-04-16 08:16:10.136406 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:16:10.136416 | orchestrator | Thursday 16 April 2026 08:16:09 +0000 (0:00:00.760) 0:30:16.016 ********
2026-04-16 08:16:10.136427 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136438 | orchestrator |
2026-04-16 08:16:10.136449 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:16:10.136460 | orchestrator | Thursday 16 April 2026 08:16:10 +0000 (0:00:00.742) 0:30:16.758 ********
2026-04-16 08:16:10.136471 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:10.136482 | orchestrator |
2026-04-16 08:16:10.136502 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:16:54.748458 | orchestrator | Thursday 16 April 2026 08:16:10 +0000 (0:00:00.822) 0:30:17.581 ********
2026-04-16 08:16:54.748569 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748583 | orchestrator |
2026-04-16 08:16:54.748594 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:16:54.748603 | orchestrator | Thursday 16 April 2026 08:16:11 +0000 (0:00:00.753) 0:30:18.335 ********
2026-04-16 08:16:54.748612 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.748622 | orchestrator |
2026-04-16 08:16:54.748631 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:16:54.748640 | orchestrator | Thursday 16 April 2026 08:16:13 +0000 (0:00:01.612) 0:30:19.947 ********
2026-04-16 08:16:54.748648 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.748657 | orchestrator |
2026-04-16 08:16:54.748666 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:16:54.748675 | orchestrator | Thursday 16 April 2026 08:16:15 +0000 (0:00:02.108) 0:30:22.056 ********
2026-04-16 08:16:54.748683 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-04-16 08:16:54.748715 | orchestrator |
2026-04-16 08:16:54.748725 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 08:16:54.748733 | orchestrator | Thursday 16 April 2026 08:16:16 +0000 (0:00:01.091) 0:30:23.147 ********
2026-04-16 08:16:54.748742 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748750 | orchestrator |
2026-04-16 08:16:54.748759 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 08:16:54.748768 | orchestrator | Thursday 16 April 2026 08:16:17 +0000 (0:00:01.103) 0:30:24.250 ********
2026-04-16 08:16:54.748776 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748785 | orchestrator |
2026-04-16 08:16:54.748807 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 08:16:54.748816 | orchestrator | Thursday 16 April 2026 08:16:18 +0000 (0:00:01.101) 0:30:25.352 ********
2026-04-16 08:16:54.748825 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:16:54.748834 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:16:54.748844 | orchestrator |
2026-04-16 08:16:54.748853 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 08:16:54.748861 | orchestrator | Thursday 16 April 2026 08:16:20 +0000 (0:00:01.896) 0:30:27.248 ********
2026-04-16 08:16:54.748870 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.748878 | orchestrator |
2026-04-16 08:16:54.748887 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 08:16:54.748896 | orchestrator | Thursday 16 April 2026 08:16:21 +0000 (0:00:01.473) 0:30:28.722 ********
2026-04-16 08:16:54.748904 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748913 | orchestrator |
2026-04-16 08:16:54.748922 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-04-16 08:16:54.748930 | orchestrator | Thursday 16 April 2026 08:16:23 +0000 (0:00:01.172) 0:30:29.894 ********
2026-04-16 08:16:54.748939 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748947 | orchestrator |
2026-04-16 08:16:54.748956 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-04-16 08:16:54.748965 | orchestrator | Thursday 16 April 2026 08:16:23 +0000 (0:00:00.770) 0:30:30.664 ********
2026-04-16 08:16:54.748973 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.748984 | orchestrator |
2026-04-16 08:16:54.748994 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-04-16 08:16:54.749022 | orchestrator | Thursday 16 April 2026 08:16:24 +0000 (0:00:00.756) 0:30:31.421 ********
2026-04-16 08:16:54.749033 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-04-16 08:16:54.749043 | orchestrator |
2026-04-16 08:16:54.749052 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-04-16 08:16:54.749063 | orchestrator | Thursday 16 April 2026 08:16:25 +0000 (0:00:01.122) 0:30:32.543 ********
2026-04-16 08:16:54.749073 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.749083 | orchestrator |
2026-04-16 08:16:54.749093 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-04-16 08:16:54.749103 | orchestrator | Thursday 16 April 2026 08:16:27 +0000 (0:00:01.716) 0:30:34.260 ********
2026-04-16 08:16:54.749113 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-04-16 08:16:54.749124 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-04-16 08:16:54.749134 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-04-16 08:16:54.749144 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749153 | orchestrator |
2026-04-16 08:16:54.749163 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-04-16 08:16:54.749173 | orchestrator | Thursday 16 April 2026 08:16:28 +0000 (0:00:01.142) 0:30:35.402 ********
2026-04-16 08:16:54.749183 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749194 | orchestrator |
2026-04-16 08:16:54.749204 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-04-16 08:16:54.749219 | orchestrator | Thursday 16 April 2026 08:16:29 +0000 (0:00:01.098) 0:30:36.501 ********
2026-04-16 08:16:54.749228 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749237 | orchestrator |
2026-04-16 08:16:54.749245 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-04-16 08:16:54.749254 | orchestrator | Thursday 16 April 2026 08:16:30 +0000 (0:00:01.141) 0:30:37.643 ********
2026-04-16 08:16:54.749263 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749271 | orchestrator |
2026-04-16 08:16:54.749280 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-04-16 08:16:54.749288 | orchestrator | Thursday 16 April 2026 08:16:32 +0000 (0:00:01.133) 0:30:38.776 ********
2026-04-16 08:16:54.749297 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749306 | orchestrator |
2026-04-16 08:16:54.749329 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-04-16 08:16:54.749339 | orchestrator | Thursday 16 April 2026 08:16:33 +0000 (0:00:01.123) 0:30:39.900 ********
2026-04-16 08:16:54.749347 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749356 | orchestrator |
2026-04-16 08:16:54.749366 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-04-16 08:16:54.749375 | orchestrator | Thursday 16 April 2026 08:16:33 +0000 (0:00:00.763) 0:30:40.664 ********
2026-04-16 08:16:54.749383 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.749392 | orchestrator |
2026-04-16 08:16:54.749401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-04-16 08:16:54.749409 | orchestrator | Thursday 16 April 2026 08:16:36 +0000 (0:00:02.315) 0:30:42.980 ********
2026-04-16 08:16:54.749418 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.749427 | orchestrator |
2026-04-16 08:16:54.749435 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-04-16 08:16:54.749444 | orchestrator | Thursday 16 April 2026 08:16:37 +0000 (0:00:00.782) 0:30:43.762 ********
2026-04-16 08:16:54.749453 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-04-16 08:16:54.749461 | orchestrator |
2026-04-16 08:16:54.749470 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-04-16 08:16:54.749479 | orchestrator | Thursday 16 April 2026 08:16:38 +0000 (0:00:01.116) 0:30:44.878 ********
2026-04-16 08:16:54.749487 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749496 | orchestrator |
2026-04-16 08:16:54.749505 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-04-16 08:16:54.749514 | orchestrator | Thursday 16 April 2026 08:16:39 +0000 (0:00:01.113) 0:30:45.991 ********
2026-04-16 08:16:54.749527 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749536 | orchestrator |
2026-04-16 08:16:54.749545 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-04-16 08:16:54.749553 | orchestrator | Thursday 16 April 2026 08:16:40 +0000 (0:00:01.165) 0:30:47.157 ********
2026-04-16 08:16:54.749562 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749571 | orchestrator |
2026-04-16 08:16:54.749579 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-04-16 08:16:54.749588 | orchestrator | Thursday 16 April 2026 08:16:41 +0000 (0:00:01.110) 0:30:48.267 ********
2026-04-16 08:16:54.749597 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749605 | orchestrator |
2026-04-16 08:16:54.749614 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-04-16 08:16:54.749622 | orchestrator | Thursday 16 April 2026 08:16:42 +0000 (0:00:01.106) 0:30:49.373 ********
2026-04-16 08:16:54.749631 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749640 | orchestrator |
2026-04-16 08:16:54.749649 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-04-16 08:16:54.749657 | orchestrator | Thursday 16 April 2026 08:16:43 +0000 (0:00:00.946) 0:30:50.320 ********
2026-04-16 08:16:54.749666 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749682 | orchestrator |
2026-04-16 08:16:54.749691 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-04-16 08:16:54.749700 | orchestrator | Thursday 16 April 2026 08:16:44 +0000 (0:00:01.098) 0:30:51.419 ********
2026-04-16 08:16:54.749708 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749717 | orchestrator |
2026-04-16 08:16:54.749726 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-04-16 08:16:54.749734 | orchestrator | Thursday 16 April 2026 08:16:45 +0000 (0:00:01.098) 0:30:52.517 ********
2026-04-16 08:16:54.749743 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:16:54.749752 | orchestrator |
2026-04-16 08:16:54.749760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-04-16 08:16:54.749769 | orchestrator | Thursday 16 April 2026 08:16:46 +0000 (0:00:01.092) 0:30:53.610 ********
2026-04-16 08:16:54.749778 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:16:54.749787 | orchestrator |
2026-04-16 08:16:54.749795 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-04-16 08:16:54.749804 | orchestrator | Thursday 16 April 2026 08:16:47 +0000 (0:00:00.757) 0:30:54.368 ********
2026-04-16 08:16:54.749813 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-04-16 08:16:54.749821 | orchestrator |
2026-04-16 08:16:54.749830 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-04-16 08:16:54.749839 | orchestrator | Thursday 16 April 2026 08:16:48 +0000 (0:00:01.083) 0:30:55.452 ********
2026-04-16 08:16:54.749847 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-04-16 08:16:54.749856 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-04-16 08:16:54.749865 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-04-16 08:16:54.749874 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-04-16 08:16:54.749882 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-04-16 08:16:54.749891 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-04-16 08:16:54.749899 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-04-16 08:16:54.749908 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-04-16 08:16:54.749918 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-04-16 08:16:54.749927 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-04-16 08:16:54.749936 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-04-16 08:16:54.749944 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-04-16 08:16:54.749953 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-04-16 08:16:54.749962 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-04-16 08:16:54.749970 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-04-16 08:16:54.749979 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-04-16 08:16:54.749988 | orchestrator |
2026-04-16 08:16:54.750125 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-04-16 08:17:35.317563 | orchestrator | Thursday 16 April 2026 08:16:55 +0000 (0:00:06.697) 0:31:02.150 ********
2026-04-16 08:17:35.317659 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317671 | orchestrator |
2026-04-16 08:17:35.317681 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 08:17:35.317690 | orchestrator | Thursday 16 April 2026 08:16:56 +0000 (0:00:00.767) 0:31:02.917 ********
2026-04-16 08:17:35.317698 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317706 | orchestrator |
2026-04-16 08:17:35.317715 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:17:35.317723 | orchestrator | Thursday 16 April 2026 08:16:56 +0000 (0:00:00.805) 0:31:03.723 ********
2026-04-16 08:17:35.317731 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317739 | orchestrator |
2026-04-16 08:17:35.317747 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:17:35.317777 | orchestrator | Thursday 16 April 2026 08:16:57 +0000 (0:00:00.748) 0:31:04.471 ********
2026-04-16 08:17:35.317786 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317793 | orchestrator |
2026-04-16 08:17:35.317801 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:17:35.317809 | orchestrator | Thursday 16 April 2026 08:16:58 +0000 (0:00:00.808) 0:31:05.280 ********
2026-04-16 08:17:35.317820 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317833 | orchestrator |
2026-04-16 08:17:35.317842 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:17:35.317849 | orchestrator | Thursday 16 April 2026 08:16:59 +0000 (0:00:00.784) 0:31:06.065 ********
2026-04-16 08:17:35.317857 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317865 | orchestrator |
2026-04-16 08:17:35.317885 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:17:35.317894 | orchestrator | Thursday 16 April 2026 08:17:00 +0000 (0:00:00.746) 0:31:06.811 ********
2026-04-16 08:17:35.317902 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317910 | orchestrator |
2026-04-16 08:17:35.317917 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:17:35.317925 | orchestrator | Thursday 16 April 2026 08:17:00 +0000 (0:00:00.774) 0:31:07.586 ********
2026-04-16 08:17:35.317933 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317941 | orchestrator |
2026-04-16 08:17:35.317949 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:17:35.317957 | orchestrator | Thursday 16 April 2026 08:17:01 +0000 (0:00:00.748) 0:31:08.334 ********
2026-04-16 08:17:35.317965 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.317973 | orchestrator |
2026-04-16 08:17:35.317980 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:17:35.317988 | orchestrator | Thursday 16 April 2026 08:17:02 +0000 (0:00:00.773) 0:31:09.108 ********
2026-04-16 08:17:35.317996 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318004 | orchestrator |
2026-04-16 08:17:35.318112 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:17:35.318122 | orchestrator | Thursday 16 April 2026 08:17:03 +0000 (0:00:00.759) 0:31:09.867 ********
2026-04-16 08:17:35.318131 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318140 | orchestrator |
2026-04-16 08:17:35.318149 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:17:35.318159 | orchestrator | Thursday 16 April 2026 08:17:03 +0000 (0:00:00.754) 0:31:10.621 ********
2026-04-16 08:17:35.318168 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318177 | orchestrator |
2026-04-16 08:17:35.318186 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:17:35.318197 | orchestrator | Thursday 16 April 2026 08:17:04 +0000 (0:00:00.760) 0:31:11.382 ********
2026-04-16 08:17:35.318211 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318220 | orchestrator |
2026-04-16 08:17:35.318229 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:17:35.318238 | orchestrator | Thursday 16 April 2026 08:17:05 +0000 (0:00:00.888) 0:31:12.271 ********
2026-04-16 08:17:35.318246 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318255 | orchestrator |
2026-04-16 08:17:35.318264 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:17:35.318272 | orchestrator | Thursday 16 April 2026 08:17:06 +0000 (0:00:00.772) 0:31:13.044 ********
2026-04-16 08:17:35.318281 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318290 | orchestrator |
2026-04-16 08:17:35.318299 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:17:35.318313 | orchestrator | Thursday 16 April 2026 08:17:07 +0000 (0:00:00.844) 0:31:13.888 ********
2026-04-16 08:17:35.318323 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318339 | orchestrator |
2026-04-16 08:17:35.318349 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:17:35.318358 | orchestrator | Thursday 16 April 2026 08:17:07 +0000 (0:00:00.826) 0:31:14.715 ********
2026-04-16 08:17:35.318367 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318375 | orchestrator |
2026-04-16 08:17:35.318384 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:17:35.318395 | orchestrator | Thursday 16 April 2026 08:17:08 +0000 (0:00:00.769) 0:31:15.484 ********
2026-04-16 08:17:35.318404 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318412 | orchestrator |
2026-04-16 08:17:35.318419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:17:35.318427 | orchestrator | Thursday 16 April 2026 08:17:09 +0000 (0:00:00.783) 0:31:16.268 ********
2026-04-16 08:17:35.318435 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318443 | orchestrator |
2026-04-16 08:17:35.318450 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:17:35.318458 | orchestrator | Thursday 16 April 2026 08:17:10 +0000 (0:00:00.789) 0:31:17.057 ********
2026-04-16 08:17:35.318466 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318474 | orchestrator |
2026-04-16 08:17:35.318496 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:17:35.318504 | orchestrator | Thursday 16 April 2026 08:17:11 +0000 (0:00:00.771) 0:31:17.828 ********
2026-04-16 08:17:35.318512 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318520 | orchestrator |
2026-04-16 08:17:35.318528 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:17:35.318536 | orchestrator | Thursday 16 April 2026 08:17:11 +0000 (0:00:00.753) 0:31:18.582 ********
2026-04-16 08:17:35.318544 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:17:35.318552 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:17:35.318560 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:17:35.318568 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318576 | orchestrator |
2026-04-16 08:17:35.318584 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:17:35.318592 | orchestrator | Thursday 16 April 2026 08:17:12 +0000 (0:00:01.116) 0:31:19.698 ********
2026-04-16 08:17:35.318600 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:17:35.318608 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:17:35.318621 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:17:35.318631 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318639 | orchestrator |
2026-04-16 08:17:35.318647 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:17:35.318655 | orchestrator | Thursday 16 April 2026 08:17:14 +0000 (0:00:01.081) 0:31:20.779 ********
2026-04-16 08:17:35.318667 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-04-16 08:17:35.318675 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-04-16 08:17:35.318683 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-04-16 08:17:35.318691 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318699 | orchestrator |
2026-04-16 08:17:35.318707 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:17:35.318714 | orchestrator | Thursday 16 April 2026 08:17:15 +0000 (0:00:01.051) 0:31:21.831 ********
2026-04-16 08:17:35.318722 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318730 | orchestrator |
2026-04-16 08:17:35.318738 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:17:35.318746 | orchestrator | Thursday 16 April 2026 08:17:15 +0000 (0:00:00.787) 0:31:22.618 ********
2026-04-16 08:17:35.318754 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-04-16 08:17:35.318767 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318775 | orchestrator |
2026-04-16 08:17:35.318783 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:17:35.318791 | orchestrator | Thursday 16 April 2026 08:17:16 +0000 (0:00:00.884) 0:31:23.502 ********
2026-04-16 08:17:35.318798 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:17:35.318806 | orchestrator |
2026-04-16 08:17:35.318814 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-04-16 08:17:35.318822 | orchestrator | Thursday 16 April 2026 08:17:18 +0000 (0:00:01.434) 0:31:24.937 ********
2026-04-16 08:17:35.318830 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:17:35.318838 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:17:35.318846 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-04-16 08:17:35.318854 | orchestrator |
2026-04-16 08:17:35.318861 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-04-16 08:17:35.318869 | orchestrator | Thursday 16 April 2026 08:17:19 +0000 (0:00:01.567) 0:31:26.505 ********
2026-04-16 08:17:35.318877 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-04-16 08:17:35.318885 | orchestrator |
2026-04-16 08:17:35.318893 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-04-16 08:17:35.318901 | orchestrator | Thursday 16 April 2026 08:17:20 +0000 (0:00:01.098) 0:31:27.603 ********
2026-04-16 08:17:35.318908 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:17:35.318916 | orchestrator |
2026-04-16 08:17:35.318924 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-04-16 08:17:35.318932 | orchestrator | Thursday 16 April 2026 08:17:22 +0000 (0:00:01.486) 0:31:29.090 ********
2026-04-16 08:17:35.318940 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:17:35.318947 | orchestrator |
2026-04-16 08:17:35.318955 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-04-16 08:17:35.318963 | orchestrator | Thursday 16 April 2026 08:17:23 +0000 (0:00:01.096) 0:31:30.186 ********
2026-04-16 08:17:35.318971 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:17:35.318979 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:17:35.318987 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:17:35.318994 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-04-16 08:17:35.319002 | orchestrator |
2026-04-16 08:17:35.319045 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-04-16 08:17:35.319053 | orchestrator | Thursday 16 April 2026 08:17:30 +0000 (0:00:07.421) 0:31:37.607 ********
2026-04-16 08:17:35.319061 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:17:35.319069 | orchestrator |
2026-04-16 08:17:35.319077 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-04-16 08:17:35.319085 | orchestrator | Thursday 16 April 2026 08:17:32 +0000 (0:00:01.190) 0:31:38.798 ********
2026-04-16 08:17:35.319095 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 08:17:35.319108 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-16 08:17:35.319116 | orchestrator |
2026-04-16 08:17:35.319124 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:17:35.319137 | orchestrator | Thursday 16 April 2026 08:17:35 +0000 (0:00:03.264) 0:31:42.063 ********
2026-04-16 08:18:17.737296 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 08:18:17.737416 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-04-16 08:18:17.737438 | orchestrator |
2026-04-16 08:18:17.737456 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-04-16 08:18:17.737474 | orchestrator | Thursday 16 April 2026 08:17:37 +0000 (0:00:02.085) 0:31:44.148 ********
2026-04-16 08:18:17.737488 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.737505 | orchestrator |
2026-04-16 08:18:17.737552 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-04-16 08:18:17.737569 | orchestrator | Thursday 16 April 2026 08:17:38 +0000 (0:00:01.536) 0:31:45.685 ********
2026-04-16 08:18:17.737586 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.737602 | orchestrator |
2026-04-16 08:18:17.737616 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-04-16 08:18:17.737633 | orchestrator | Thursday 16 April 2026 08:17:39 +0000 (0:00:00.779) 0:31:46.465 ********
2026-04-16 08:18:17.737649 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.737664 | orchestrator |
2026-04-16 08:18:17.737674 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-04-16 08:18:17.737684 | orchestrator | Thursday 16 April 2026 08:17:40 +0000 (0:00:00.773) 0:31:47.238 ********
2026-04-16 08:18:17.737694 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-04-16 08:18:17.737704 | orchestrator |
2026-04-16 08:18:17.737713 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-04-16 08:18:17.737723 | orchestrator | Thursday 16 April 2026 08:17:41 +0000 (0:00:01.158) 0:31:48.396 ********
2026-04-16 08:18:17.737747 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.737757 | orchestrator |
2026-04-16 08:18:17.737767 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-04-16 08:18:17.737776 | orchestrator | Thursday 16 April 2026 08:17:42 +0000 (0:00:01.119) 0:31:49.516 ********
2026-04-16 08:18:17.737786 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.737795 | orchestrator |
2026-04-16 08:18:17.737805 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-04-16 08:18:17.737815 | orchestrator | Thursday 16 April 2026 08:17:43 +0000 (0:00:01.144) 0:31:50.661 ********
2026-04-16 08:18:17.737824 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-04-16 08:18:17.737836 | orchestrator |
2026-04-16 08:18:17.737846 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-04-16 08:18:17.737857 | orchestrator | Thursday 16 April 2026 08:17:45 +0000 (0:00:01.135) 0:31:51.797 ********
2026-04-16 08:18:17.737868 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.737878 | orchestrator |
2026-04-16 08:18:17.737888 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-04-16 08:18:17.737899 | orchestrator | Thursday 16 April 2026 08:17:47 +0000 (0:00:02.024) 0:31:53.822 ********
2026-04-16 08:18:17.737910 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.737921 | orchestrator |
2026-04-16 08:18:17.737931 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-04-16 08:18:17.737942 | orchestrator | Thursday 16 April 2026 08:17:48 +0000 (0:00:01.930) 0:31:55.753 ********
2026-04-16 08:18:17.737953 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.737963 | orchestrator |
2026-04-16 08:18:17.737974 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-04-16 08:18:17.737985 | orchestrator | Thursday 16 April 2026 08:17:51 +0000 (0:00:02.454) 0:31:58.207 ********
2026-04-16 08:18:17.737997 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:18:17.738007 | orchestrator |
2026-04-16 08:18:17.738099 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-04-16 08:18:17.738111 | orchestrator | Thursday 16 April 2026 08:17:55 +0000 (0:00:03.758) 0:32:01.966 ********
2026-04-16 08:18:17.738122 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-04-16 08:18:17.738135 | orchestrator |
2026-04-16 08:18:17.738147 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-04-16 08:18:17.738159 | orchestrator | Thursday 16 April 2026 08:17:56 +0000 (0:00:01.456) 0:32:03.423 ********
2026-04-16 08:18:17.738172 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:18:17.738184 | orchestrator |
2026-04-16 08:18:17.738198 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-04-16 08:18:17.738227 | orchestrator | Thursday 16 April 2026 08:17:59 +0000 (0:00:02.397) 0:32:05.821 ********
2026-04-16 08:18:17.738267 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:18:17.738289 | orchestrator |
2026-04-16 08:18:17.738307 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-04-16 08:18:17.738325 | orchestrator | Thursday 16 April 2026 08:18:01 +0000 (0:00:02.641) 0:32:08.463 ********
2026-04-16 08:18:17.738342 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.738362 | orchestrator |
2026-04-16 08:18:17.738379 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-04-16 08:18:17.738398 | orchestrator | Thursday 16 April 2026 08:18:03 +0000 (0:00:01.853) 0:32:10.316 ********
2026-04-16 08:18:17.738416 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:18:17.738434 | orchestrator |
2026-04-16 08:18:17.738452 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-04-16 08:18:17.738467 | orchestrator | Thursday 16 April 2026 08:18:04 +0000 (0:00:01.117) 0:32:11.433 ********
2026-04-16 08:18:17.738478 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-16 08:18:17.738489 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-16 08:18:17.738500 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.738511 | orchestrator |
2026-04-16 08:18:17.738522 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-04-16 08:18:17.738532 | orchestrator | Thursday 16 April 2026 08:18:05 +0000 (0:00:01.309) 0:32:12.743 ********
2026-04-16 08:18:17.738543 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-04-16 08:18:17.738554 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)
2026-04-16 08:18:17.738587 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)
2026-04-16 08:18:17.738598 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-04-16 08:18:17.738609 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:18:17.738619 | orchestrator |
2026-04-16 08:18:17.738630 | orchestrator | PLAY [Set osd flags] ***********************************************************
2026-04-16 08:18:17.738641 | orchestrator |
2026-04-16 08:18:17.738651 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:18:17.738662 | orchestrator | Thursday 16 April 2026 08:18:07 +0000 (0:00:01.857) 0:32:14.600 ********
2026-04-16 08:18:17.738673 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:18:17.738684 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:18:17.738694 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:18:17.738705 | orchestrator |
2026-04-16 08:18:17.738716 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:18:17.738726 | orchestrator | Thursday 16 April 2026 08:18:09 +0000 (0:00:01.711) 0:32:16.312 ********
2026-04-16 08:18:17.738737 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:18:17.738748 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:18:17.738758 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:18:17.738769 | orchestrator |
2026-04-16 08:18:17.738780 | orchestrator | TASK [Get pool list] ***********************************************************
2026-04-16 08:18:17.738791 | orchestrator | Thursday 16 April 2026
08:18:11 +0000 (0:00:01.649) 0:32:17.961 ******** 2026-04-16 08:18:17.738801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:18:17.738812 | orchestrator | 2026-04-16 08:18:17.738823 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-04-16 08:18:17.738841 | orchestrator | Thursday 16 April 2026 08:18:14 +0000 (0:00:03.046) 0:32:21.008 ******** 2026-04-16 08:18:17.738852 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:18:17.738863 | orchestrator | 2026-04-16 08:18:17.738874 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-04-16 08:18:17.738884 | orchestrator | Thursday 16 April 2026 08:18:17 +0000 (0:00:03.172) 0:32:24.181 ******** 2026-04-16 08:18:17.738901 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-04-16T05:49:47.869668+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 
'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:17.738945 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-04-16T05:50:54.058385+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '34', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 
'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.184794 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-04-16T05:50:57.571074+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 
'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.184939 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-04-16T05:51:54.773880+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '68', 
'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.184956 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-04-16T05:52:01.149127+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 
'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.184978 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-04-16T05:52:07.393239+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 
'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '70', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 2.25, 'score_stable': 2.25, 'optimal_score': 1, 'raw_score_acting': 2.25, 'raw_score_stable': 2.25, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.184998 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-04-16T05:52:13.512582+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 
'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '184', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '72', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.897549 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-04-16T05:52:19.583944+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '72', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.897626 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-04-16T05:52:31.415236+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': 
{'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '75', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.897666 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-04-16T05:53:13.812537+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 
'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '82', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 82, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.897674 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-04-16T05:53:22.603451+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 
'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '92', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 92, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.8799999952316284, 'score_stable': 1.8799999952316284, 'optimal_score': 1, 'raw_score_acting': 1.8799999952316284, 'raw_score_stable': 1.8799999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:18:18.897728 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-04-16T05:53:31.692252+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 
'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '194', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 194, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:19:55.718677 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-04-16T05:53:41.600017+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 
2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '108', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 108, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:19:55.718865 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-04-16T05:53:49.674407+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 
'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '117', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 117, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-04-16 08:19:55.718902 | orchestrator | 2026-04-16 08:19:55.718948 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-04-16 08:19:55.719060 | orchestrator | Thursday 16 April 2026 
08:18:20 +0000 (0:00:02.772) 0:32:26.953 ******** 2026-04-16 08:19:55.719088 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:19:55.719108 | orchestrator | 2026-04-16 08:19:55.719128 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-04-16 08:19:55.719149 | orchestrator | Thursday 16 April 2026 08:18:23 +0000 (0:00:03.046) 0:32:30.000 ******** 2026-04-16 08:19:55.719170 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-04-16 08:19:55.719192 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-04-16 08:19:55.719212 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-04-16 08:19:55.719232 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-04-16 08:19:55.719254 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-04-16 08:19:55.719275 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-04-16 08:19:55.719295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-04-16 08:19:55.719316 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 2026-04-16 08:19:55.719336 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-04-16 08:19:55.719357 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-04-16 08:19:55.719378 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-04-16 08:19:55.719398 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-04-16 08:19:55.719417 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-04-16 08:19:55.719437 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-04-16 08:19:55.719457 | orchestrator | 2026-04-16 08:19:55.719477 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-04-16 08:19:55.719496 | orchestrator | Thursday 16 April 2026 08:19:39 +0000 (0:01:16.471) 0:33:46.471 ******** 2026-04-16 08:19:55.719517 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-04-16 08:19:55.719536 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-04-16 08:19:55.719555 | orchestrator | 2026-04-16 08:19:55.719576 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-16 08:19:55.719596 | orchestrator | 2026-04-16 08:19:55.719615 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:19:55.719635 | orchestrator | Thursday 16 April 2026 08:19:45 +0000 (0:00:06.125) 0:33:52.597 ******** 2026-04-16 08:19:55.719654 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-16 08:19:55.719674 | orchestrator | 2026-04-16 08:19:55.719695 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:19:55.719714 | orchestrator | Thursday 16 April 2026 08:19:47 +0000 (0:00:01.187) 0:33:53.784 ******** 2026-04-16 08:19:55.719733 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.719753 | orchestrator | 2026-04-16 08:19:55.719773 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] 
***************************************** 2026-04-16 08:19:55.719793 | orchestrator | Thursday 16 April 2026 08:19:48 +0000 (0:00:01.492) 0:33:55.277 ******** 2026-04-16 08:19:55.719812 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.719832 | orchestrator | 2026-04-16 08:19:55.719865 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:19:55.719886 | orchestrator | Thursday 16 April 2026 08:19:49 +0000 (0:00:01.090) 0:33:56.367 ******** 2026-04-16 08:19:55.719906 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.719925 | orchestrator | 2026-04-16 08:19:55.719945 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:19:55.719964 | orchestrator | Thursday 16 April 2026 08:19:51 +0000 (0:00:01.450) 0:33:57.818 ******** 2026-04-16 08:19:55.719984 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.720003 | orchestrator | 2026-04-16 08:19:55.720024 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:19:55.720072 | orchestrator | Thursday 16 April 2026 08:19:52 +0000 (0:00:01.119) 0:33:58.938 ******** 2026-04-16 08:19:55.720092 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.720110 | orchestrator | 2026-04-16 08:19:55.720145 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:19:55.720179 | orchestrator | Thursday 16 April 2026 08:19:53 +0000 (0:00:01.122) 0:34:00.060 ******** 2026-04-16 08:19:55.720197 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.720215 | orchestrator | 2026-04-16 08:19:55.720232 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:19:55.720251 | orchestrator | Thursday 16 April 2026 08:19:54 +0000 (0:00:01.132) 0:34:01.193 ******** 2026-04-16 08:19:55.720270 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
08:19:55.720289 | orchestrator | 2026-04-16 08:19:55.720307 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:19:55.720325 | orchestrator | Thursday 16 April 2026 08:19:55 +0000 (0:00:01.137) 0:34:02.330 ******** 2026-04-16 08:19:55.720352 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:19:55.720372 | orchestrator | 2026-04-16 08:19:55.720402 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:20:20.337941 | orchestrator | Thursday 16 April 2026 08:19:56 +0000 (0:00:01.113) 0:34:03.444 ******** 2026-04-16 08:20:20.338267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:20:20.338306 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:20:20.338328 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:20:20.338349 | orchestrator | 2026-04-16 08:20:20.338370 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:20:20.338390 | orchestrator | Thursday 16 April 2026 08:19:58 +0000 (0:00:01.975) 0:34:05.420 ******** 2026-04-16 08:20:20.338412 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:20.338426 | orchestrator | 2026-04-16 08:20:20.338495 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:20:20.338510 | orchestrator | Thursday 16 April 2026 08:19:59 +0000 (0:00:01.245) 0:34:06.665 ******** 2026-04-16 08:20:20.338522 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:20:20.338535 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:20:20.338548 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-04-16 08:20:20.338561 | orchestrator | 2026-04-16 08:20:20.338574 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:20:20.338586 | orchestrator | Thursday 16 April 2026 08:20:03 +0000 (0:00:03.151) 0:34:09.817 ******** 2026-04-16 08:20:20.338600 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-16 08:20:20.338612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-16 08:20:20.338625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-16 08:20:20.338637 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.338651 | orchestrator | 2026-04-16 08:20:20.338663 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:20:20.338711 | orchestrator | Thursday 16 April 2026 08:20:04 +0000 (0:00:01.672) 0:34:11.489 ******** 2026-04-16 08:20:20.338734 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338776 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338794 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.338813 | orchestrator | 2026-04-16 08:20:20.338834 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-04-16 08:20:20.338854 | orchestrator | Thursday 16 April 2026 08:20:06 +0000 (0:00:01.912) 0:34:13.402 ******** 2026-04-16 08:20:20.338875 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338897 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338917 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:20:20.338939 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.338960 | orchestrator | 2026-04-16 08:20:20.338979 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:20:20.338999 | orchestrator | Thursday 16 April 2026 08:20:07 +0000 (0:00:01.133) 0:34:14.535 ******** 2026-04-16 08:20:20.339097 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '73554beccbed', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:20:00.408216', 'end': '2026-04-16 08:20:00.476325', 'delta': '0:00:00.068109', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:20:20.339124 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:20:01.291031', 'end': '2026-04-16 08:20:01.337564', 'delta': '0:00:00.046533', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:20:20.339159 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:20:01.873491', 'end': '2026-04-16 08:20:01.918845', 'delta': '0:00:00.045354', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:20:20.339178 | orchestrator | 2026-04-16 08:20:20.339198 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:20:20.339217 | orchestrator | Thursday 16 April 2026 08:20:09 +0000 (0:00:01.224) 0:34:15.760 ******** 2026-04-16 08:20:20.339235 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:20.339253 | orchestrator | 2026-04-16 08:20:20.339270 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:20:20.339287 | orchestrator | Thursday 16 April 2026 08:20:10 +0000 (0:00:01.236) 0:34:16.996 ******** 2026-04-16 08:20:20.339305 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.339323 | orchestrator | 2026-04-16 08:20:20.339343 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:20:20.339361 | orchestrator | Thursday 16 April 2026 08:20:11 +0000 (0:00:01.269) 0:34:18.266 ******** 2026-04-16 08:20:20.339379 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:20.339398 | orchestrator | 2026-04-16 08:20:20.339417 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:20:20.339435 | orchestrator | Thursday 16 April 2026 08:20:12 +0000 (0:00:01.094) 0:34:19.361 ******** 2026-04-16 08:20:20.339452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:20:20.339470 | orchestrator | 2026-04-16 08:20:20.339487 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:20:20.339506 | orchestrator | Thursday 16 April 2026 08:20:14 +0000 (0:00:01.911) 0:34:21.273 ******** 2026-04-16 
08:20:20.339525 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:20.339543 | orchestrator | 2026-04-16 08:20:20.339563 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:20:20.339583 | orchestrator | Thursday 16 April 2026 08:20:15 +0000 (0:00:01.112) 0:34:22.385 ******** 2026-04-16 08:20:20.339671 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.339682 | orchestrator | 2026-04-16 08:20:20.339693 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:20:20.339704 | orchestrator | Thursday 16 April 2026 08:20:16 +0000 (0:00:01.091) 0:34:23.477 ******** 2026-04-16 08:20:20.339715 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.339726 | orchestrator | 2026-04-16 08:20:20.339736 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:20:20.339747 | orchestrator | Thursday 16 April 2026 08:20:17 +0000 (0:00:01.221) 0:34:24.699 ******** 2026-04-16 08:20:20.339758 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.339769 | orchestrator | 2026-04-16 08:20:20.339780 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:20:20.339791 | orchestrator | Thursday 16 April 2026 08:20:19 +0000 (0:00:01.181) 0:34:25.880 ******** 2026-04-16 08:20:20.339802 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:20.339813 | orchestrator | 2026-04-16 08:20:20.339843 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:20:20.339855 | orchestrator | Thursday 16 April 2026 08:20:20 +0000 (0:00:01.100) 0:34:26.981 ******** 2026-04-16 08:20:20.339880 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:26.136303 | orchestrator | 2026-04-16 08:20:26.136417 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] 
*************************** 2026-04-16 08:20:26.136436 | orchestrator | Thursday 16 April 2026 08:20:21 +0000 (0:00:01.200) 0:34:28.181 ******** 2026-04-16 08:20:26.136448 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:26.136460 | orchestrator | 2026-04-16 08:20:26.136472 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:20:26.136483 | orchestrator | Thursday 16 April 2026 08:20:22 +0000 (0:00:01.084) 0:34:29.266 ******** 2026-04-16 08:20:26.136494 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:26.136506 | orchestrator | 2026-04-16 08:20:26.136517 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:20:26.136528 | orchestrator | Thursday 16 April 2026 08:20:23 +0000 (0:00:01.130) 0:34:30.397 ******** 2026-04-16 08:20:26.136539 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:20:26.136550 | orchestrator | 2026-04-16 08:20:26.136561 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:20:26.136573 | orchestrator | Thursday 16 April 2026 08:20:24 +0000 (0:00:01.110) 0:34:31.508 ******** 2026-04-16 08:20:26.136584 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:20:26.136594 | orchestrator | 2026-04-16 08:20:26.136605 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:20:26.136616 | orchestrator | Thursday 16 April 2026 08:20:25 +0000 (0:00:01.166) 0:34:32.675 ******** 2026-04-16 08:20:26.136629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': 
'', 'holders': []}})  2026-04-16 08:20:26.136646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}})  2026-04-16 08:20:26.136661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:20:26.136673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}})  2026-04-16 08:20:26.136727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:20:26.136759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:20:26.136772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:20:26.136784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:20:26.136796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:20:26.136807 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:20:26.136818 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}})  2026-04-16 08:20:26.136841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}})  2026-04-16 08:20:26.136868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:20:27.422770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 08:20:27.422867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:20:27.422883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:20:27.422915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:20:27.422927 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:20:27.422938 | orchestrator |
2026-04-16 08:20:27.422947 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 08:20:27.422957 | orchestrator | Thursday 16 April 2026 08:20:27 +0000 (0:00:01.351) 0:34:34.027 ********
2026-04-16 08:20:27.422998 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.423012 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.423023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.423089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.423108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.423130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540615 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540668 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540682 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540692 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540712 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:20:27.540727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:21:05.504355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:21:05.504478 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.504497 | orchestrator |
2026-04-16 08:21:05.504509 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:21:05.504522 | orchestrator | Thursday 16 April 2026 08:20:28 +0000 (0:00:01.500) 0:34:35.422 ********
2026-04-16 08:21:05.504534 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.504546 | orchestrator |
2026-04-16 08:21:05.504557 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:21:05.504568 | orchestrator | Thursday 16 April 2026 08:20:30 +0000 (0:00:01.102) 0:34:36.922 ********
2026-04-16 08:21:05.504605 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.504617 | orchestrator |
2026-04-16 08:21:05.504628 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:21:05.504639 | orchestrator | Thursday 16 April 2026 08:20:31 +0000 (0:00:01.424) 0:34:38.025 ********
2026-04-16 08:21:05.504650 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.504661 | orchestrator |
2026-04-16 08:21:05.504672 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:21:05.504683 | orchestrator | Thursday 16 April 2026 08:20:32 +0000 (0:00:01.094) 0:34:39.450 ********
2026-04-16 08:21:05.504693 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.504704 | orchestrator |
2026-04-16 08:21:05.504715 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:21:05.504726 | orchestrator | Thursday 16 April 2026 08:20:33 +0000 (0:00:01.199) 0:34:40.545 ********
2026-04-16 08:21:05.504737 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.504748 | orchestrator |
2026-04-16 08:21:05.504759 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:21:05.504770 | orchestrator | Thursday 16 April 2026 08:20:34 +0000 (0:00:01.114) 0:34:41.745 ********
2026-04-16 08:21:05.504781 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.504791 | orchestrator |
2026-04-16 08:21:05.504802 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:21:05.504813 | orchestrator | Thursday 16 April 2026 08:20:36 +0000 (0:00:01.114) 0:34:42.860 ********
2026-04-16 08:21:05.504824 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:21:05.504836 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:21:05.504847 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:21:05.504858 | orchestrator |
2026-04-16 08:21:05.504869 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:21:05.504882 | orchestrator | Thursday 16 April 2026 08:20:38 +0000 (0:00:01.922) 0:34:44.782 ********
2026-04-16 08:21:05.504896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:21:05.504910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:21:05.504922 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:21:05.504935 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.504948 | orchestrator |
2026-04-16 08:21:05.504961 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:21:05.504974 | orchestrator | Thursday 16 April 2026 08:20:39 +0000 (0:00:01.168) 0:34:45.950 ********
2026-04-16 08:21:05.504987 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-04-16 08:21:05.505001 | orchestrator |
2026-04-16 08:21:05.505029 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:21:05.505066 | orchestrator | Thursday 16 April 2026 08:20:40 +0000 (0:00:01.180) 0:34:47.131 ********
2026-04-16 08:21:05.505080 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505093 | orchestrator |
2026-04-16 08:21:05.505105 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:21:05.505115 | orchestrator | Thursday 16 April 2026 08:20:41 +0000 (0:00:01.137) 0:34:48.268 ********
2026-04-16 08:21:05.505126 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505137 | orchestrator |
2026-04-16 08:21:05.505148 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:21:05.505159 | orchestrator | Thursday 16 April 2026 08:20:42 +0000 (0:00:01.130) 0:34:49.399 ********
2026-04-16 08:21:05.505170 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505181 | orchestrator |
2026-04-16 08:21:05.505196 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:21:05.505215 | orchestrator | Thursday 16 April 2026 08:20:43 +0000 (0:00:01.117) 0:34:50.517 ********
2026-04-16 08:21:05.505253 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.505277 | orchestrator |
2026-04-16 08:21:05.505294 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:21:05.505312 | orchestrator | Thursday 16 April 2026 08:20:44 +0000 (0:00:01.207) 0:34:51.724 ********
2026-04-16 08:21:05.505329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:21:05.505369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:21:05.505388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:21:05.505406 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505424 | orchestrator |
2026-04-16 08:21:05.505442 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:21:05.505461 | orchestrator | Thursday 16 April 2026 08:20:46 +0000 (0:00:01.373) 0:34:53.098 ********
2026-04-16 08:21:05.505480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:21:05.505498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:21:05.505517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:21:05.505535 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505554 | orchestrator |
2026-04-16 08:21:05.505572 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:21:05.505628 | orchestrator | Thursday 16 April 2026 08:20:47 +0000 (0:00:01.343) 0:34:54.441 ********
2026-04-16 08:21:05.505647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:21:05.505665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:21:05.505680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:21:05.505691 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:05.505701 | orchestrator |
2026-04-16 08:21:05.505712 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:21:05.505723 | orchestrator | Thursday 16 April 2026 08:20:49 +0000 (0:00:01.395) 0:34:55.837 ********
2026-04-16 08:21:05.505734 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.505745 | orchestrator |
2026-04-16 08:21:05.505756 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:21:05.505769 | orchestrator | Thursday 16 April 2026 08:20:50 +0000 (0:00:01.137) 0:34:56.975 ********
2026-04-16 08:21:05.505788 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 08:21:05.505806 | orchestrator |
2026-04-16 08:21:05.505824 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:21:05.505843 | orchestrator | Thursday 16 April 2026 08:20:51 +0000 (0:00:01.321) 0:34:58.297 ********
2026-04-16 08:21:05.505861 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:21:05.505880 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:21:05.505898 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:21:05.505910 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:21:05.505921 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:21:05.505932 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:21:05.505943 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:21:05.505954 | orchestrator |
2026-04-16 08:21:05.505965 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:21:05.505976 | orchestrator | Thursday 16 April 2026 08:20:53 +0000 (0:00:02.036) 0:35:00.334 ********
2026-04-16 08:21:05.505987 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:21:05.505997 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:21:05.506008 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:21:05.506124 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:21:05.506138 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:21:05.506149 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:21:05.506160 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:21:05.506171 | orchestrator |
2026-04-16 08:21:05.506182 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-04-16 08:21:05.506193 | orchestrator | Thursday 16 April 2026 08:20:56 +0000 (0:00:02.770) 0:35:03.104 ********
2026-04-16 08:21:05.506203 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.506214 | orchestrator |
2026-04-16 08:21:05.506232 | orchestrator | TASK [Set num_osds] ************************************************************
2026-04-16 08:21:05.506243 | orchestrator | Thursday 16 April 2026 08:20:57 +0000 (0:00:01.448) 0:35:04.552 ********
2026-04-16 08:21:05.506254 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.506265 | orchestrator |
2026-04-16 08:21:05.506275 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-04-16 08:21:05.506286 | orchestrator | Thursday 16 April 2026 08:20:58 +0000 (0:00:01.140) 0:35:05.692 ********
2026-04-16 08:21:05.506297 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:05.506308 | orchestrator |
2026-04-16 08:21:05.506319 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-04-16 08:21:05.506329 | orchestrator | Thursday 16 April 2026 08:21:00 +0000 (0:00:01.274) 0:35:06.967 ********
2026-04-16 08:21:05.506341 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-04-16 08:21:05.506351 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-04-16 08:21:05.506362 | orchestrator |
2026-04-16 08:21:05.506373 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:21:05.506384 | orchestrator | Thursday 16 April 2026 08:21:04 +0000 (0:00:04.173) 0:35:11.141 ********
2026-04-16 08:21:05.506395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-16 08:21:05.506406 | orchestrator |
2026-04-16 08:21:05.506417 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:21:05.506439 | orchestrator | Thursday 16 April 2026 08:21:05 +0000 (0:00:01.110) 0:35:12.251 ********
2026-04-16 08:21:54.461406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-16 08:21:54.461528 | orchestrator |
2026-04-16 08:21:54.461554 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:21:54.461574 | orchestrator | Thursday 16 April 2026 08:21:06 +0000 (0:00:01.087) 0:35:13.339 ********
2026-04-16 08:21:54.461592 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.461612 | orchestrator |
2026-04-16 08:21:54.461630 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:21:54.461648 | orchestrator | Thursday 16 April 2026 08:21:07 +0000 (0:00:01.108) 0:35:14.447 ********
2026-04-16 08:21:54.461668 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.461688 | orchestrator |
2026-04-16 08:21:54.461707 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:21:54.461725 | orchestrator | Thursday 16 April 2026 08:21:09 +0000 (0:00:01.521) 0:35:15.968 ********
2026-04-16 08:21:54.461745 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.461757 | orchestrator |
2026-04-16 08:21:54.461767 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:21:54.461778 | orchestrator | Thursday 16 April 2026 08:21:10 +0000 (0:00:01.529) 0:35:17.498 ********
2026-04-16 08:21:54.461789 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.461800 | orchestrator |
2026-04-16 08:21:54.461811 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:21:54.461822 | orchestrator | Thursday 16 April 2026 08:21:12 +0000 (0:00:01.504) 0:35:19.002 ********
2026-04-16 08:21:54.461860 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.461879 | orchestrator |
2026-04-16 08:21:54.461921 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:21:54.461955 | orchestrator | Thursday 16 April 2026 08:21:13 +0000 (0:00:01.095) 0:35:20.097 ********
2026-04-16 08:21:54.461974 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.461994 | orchestrator |
2026-04-16 08:21:54.462105 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:21:54.462123 | orchestrator | Thursday 16 April 2026 08:21:14 +0000 (0:00:01.096) 0:35:21.194 ********
2026-04-16 08:21:54.462136 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462149 | orchestrator |
2026-04-16 08:21:54.462162 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:21:54.462174 | orchestrator | Thursday 16 April 2026 08:21:15 +0000 (0:00:01.143) 0:35:22.338 ********
2026-04-16 08:21:54.462187 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462198 | orchestrator |
2026-04-16 08:21:54.462209 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:21:54.462220 | orchestrator | Thursday 16 April 2026 08:21:17 +0000 (0:00:01.528) 0:35:23.867 ********
2026-04-16 08:21:54.462231 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462241 | orchestrator |
2026-04-16 08:21:54.462252 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:21:54.462263 | orchestrator | Thursday 16 April 2026 08:21:18 +0000 (0:00:01.552) 0:35:25.419 ********
2026-04-16 08:21:54.462274 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462285 | orchestrator |
2026-04-16 08:21:54.462296 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:21:54.462307 | orchestrator | Thursday 16 April 2026 08:21:19 +0000 (0:00:01.100) 0:35:26.520 ********
2026-04-16 08:21:54.462317 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462328 | orchestrator |
2026-04-16 08:21:54.462339 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:21:54.462350 | orchestrator | Thursday 16 April 2026 08:21:20 +0000 (0:00:01.126) 0:35:27.647 ********
2026-04-16 08:21:54.462360 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462371 | orchestrator |
2026-04-16 08:21:54.462382 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:21:54.462393 | orchestrator | Thursday 16 April 2026 08:21:22 +0000 (0:00:01.173) 0:35:28.820 ********
2026-04-16 08:21:54.462403 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462414 | orchestrator |
2026-04-16 08:21:54.462425 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:21:54.462436 | orchestrator | Thursday 16 April 2026 08:21:23 +0000 (0:00:01.129) 0:35:29.950 ********
2026-04-16 08:21:54.462447 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462457 | orchestrator |
2026-04-16 08:21:54.462468 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:21:54.462479 | orchestrator | Thursday 16 April 2026 08:21:24 +0000 (0:00:01.096) 0:35:31.046 ********
2026-04-16 08:21:54.462490 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462501 | orchestrator |
2026-04-16 08:21:54.462526 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:21:54.462538 | orchestrator | Thursday 16 April 2026 08:21:25 +0000 (0:00:01.140) 0:35:32.187 ********
2026-04-16 08:21:54.462549 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462559 | orchestrator |
2026-04-16 08:21:54.462570 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:21:54.462581 | orchestrator | Thursday 16 April 2026 08:21:26 +0000 (0:00:01.102) 0:35:33.290 ********
2026-04-16 08:21:54.462591 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462602 | orchestrator |
2026-04-16 08:21:54.462613 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:21:54.462624 | orchestrator | Thursday 16 April 2026 08:21:27 +0000 (0:00:01.101) 0:35:34.391 ********
2026-04-16 08:21:54.462646 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462657 | orchestrator |
2026-04-16 08:21:54.462668 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:21:54.462679 | orchestrator | Thursday 16 April 2026 08:21:28 +0000 (0:00:01.206) 0:35:35.598 ********
2026-04-16 08:21:54.462689 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:21:54.462700 | orchestrator |
2026-04-16 08:21:54.462711 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:21:54.462722 | orchestrator | Thursday 16 April 2026 08:21:29 +0000 (0:00:01.102) 0:35:36.700 ********
2026-04-16 08:21:54.462733 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462744 | orchestrator |
2026-04-16 08:21:54.462776 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:21:54.462788 | orchestrator | Thursday 16 April 2026 08:21:31 +0000 (0:00:01.133) 0:35:37.834 ********
2026-04-16 08:21:54.462799 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462810 | orchestrator |
2026-04-16 08:21:54.462821 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:21:54.462831 | orchestrator | Thursday 16 April 2026 08:21:32 +0000 (0:00:01.097) 0:35:38.931 ********
2026-04-16 08:21:54.462842 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462853 | orchestrator |
2026-04-16 08:21:54.462864 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:21:54.462874 | orchestrator | Thursday 16 April 2026 08:21:33 +0000 (0:00:01.125) 0:35:40.057 ********
2026-04-16 08:21:54.462885 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462896 | orchestrator |
2026-04-16 08:21:54.462907 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:21:54.462918 | orchestrator | Thursday 16 April 2026 08:21:34 +0000 (0:00:01.109) 0:35:41.166 ********
2026-04-16 08:21:54.462928 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462939 | orchestrator |
2026-04-16 08:21:54.462950 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:21:54.462961 | orchestrator | Thursday 16 April 2026 08:21:35 +0000 (0:00:01.128) 0:35:42.295 ********
2026-04-16 08:21:54.462972 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.462982 | orchestrator |
2026-04-16 08:21:54.462993 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:21:54.463006 | orchestrator | Thursday 16 April 2026 08:21:36 +0000 (0:00:01.083) 0:35:43.379 ********
2026-04-16 08:21:54.463025 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:21:54.463067 | orchestrator |
2026-04-16
08:21:54.463087 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:21:54.463105 | orchestrator | Thursday 16 April 2026 08:21:37 +0000 (0:00:00.947) 0:35:44.327 ******** 2026-04-16 08:21:54.463123 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463141 | orchestrator | 2026-04-16 08:21:54.463153 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:21:54.463164 | orchestrator | Thursday 16 April 2026 08:21:38 +0000 (0:00:00.972) 0:35:45.299 ******** 2026-04-16 08:21:54.463175 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463185 | orchestrator | 2026-04-16 08:21:54.463196 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:21:54.463207 | orchestrator | Thursday 16 April 2026 08:21:39 +0000 (0:00:01.083) 0:35:46.382 ******** 2026-04-16 08:21:54.463217 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463228 | orchestrator | 2026-04-16 08:21:54.463239 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:21:54.463249 | orchestrator | Thursday 16 April 2026 08:21:40 +0000 (0:00:01.128) 0:35:47.511 ******** 2026-04-16 08:21:54.463260 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463271 | orchestrator | 2026-04-16 08:21:54.463281 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:21:54.463292 | orchestrator | Thursday 16 April 2026 08:21:41 +0000 (0:00:00.899) 0:35:48.412 ******** 2026-04-16 08:21:54.463315 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463326 | orchestrator | 2026-04-16 08:21:54.463336 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:21:54.463347 | orchestrator | Thursday 16 April 2026 08:21:42 +0000 
(0:00:00.908) 0:35:49.320 ******** 2026-04-16 08:21:54.463358 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:21:54.463368 | orchestrator | 2026-04-16 08:21:54.463379 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:21:54.463390 | orchestrator | Thursday 16 April 2026 08:21:44 +0000 (0:00:01.990) 0:35:51.310 ******** 2026-04-16 08:21:54.463400 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:21:54.463411 | orchestrator | 2026-04-16 08:21:54.463422 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:21:54.463433 | orchestrator | Thursday 16 April 2026 08:21:46 +0000 (0:00:02.179) 0:35:53.489 ******** 2026-04-16 08:21:54.463443 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-16 08:21:54.463454 | orchestrator | 2026-04-16 08:21:54.463465 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:21:54.463476 | orchestrator | Thursday 16 April 2026 08:21:47 +0000 (0:00:01.079) 0:35:54.569 ******** 2026-04-16 08:21:54.463486 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463497 | orchestrator | 2026-04-16 08:21:54.463513 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:21:54.463524 | orchestrator | Thursday 16 April 2026 08:21:48 +0000 (0:00:01.108) 0:35:55.678 ******** 2026-04-16 08:21:54.463534 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463545 | orchestrator | 2026-04-16 08:21:54.463556 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:21:54.463566 | orchestrator | Thursday 16 April 2026 08:21:50 +0000 (0:00:01.105) 0:35:56.783 ******** 2026-04-16 08:21:54.463577 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 
08:21:54.463588 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:21:54.463598 | orchestrator | 2026-04-16 08:21:54.463609 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:21:54.463620 | orchestrator | Thursday 16 April 2026 08:21:51 +0000 (0:00:01.838) 0:35:58.623 ******** 2026-04-16 08:21:54.463631 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:21:54.463641 | orchestrator | 2026-04-16 08:21:54.463652 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:21:54.463663 | orchestrator | Thursday 16 April 2026 08:21:53 +0000 (0:00:01.476) 0:36:00.099 ******** 2026-04-16 08:21:54.463674 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:21:54.463684 | orchestrator | 2026-04-16 08:21:54.463695 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:21:54.463714 | orchestrator | Thursday 16 April 2026 08:21:54 +0000 (0:00:01.106) 0:36:01.206 ******** 2026-04-16 08:22:39.766824 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.766931 | orchestrator | 2026-04-16 08:22:39.766947 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:22:39.766960 | orchestrator | Thursday 16 April 2026 08:21:55 +0000 (0:00:01.113) 0:36:02.319 ******** 2026-04-16 08:22:39.766971 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.766981 | orchestrator | 2026-04-16 08:22:39.766991 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:22:39.767001 | orchestrator | Thursday 16 April 2026 08:21:56 +0000 (0:00:01.100) 0:36:03.420 ******** 2026-04-16 08:22:39.767011 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-16 08:22:39.767021 | orchestrator | 
2026-04-16 08:22:39.767031 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:22:39.767103 | orchestrator | Thursday 16 April 2026 08:21:57 +0000 (0:00:01.215) 0:36:04.636 ******** 2026-04-16 08:22:39.767140 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:22:39.767151 | orchestrator | 2026-04-16 08:22:39.767161 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:22:39.767172 | orchestrator | Thursday 16 April 2026 08:21:59 +0000 (0:00:01.684) 0:36:06.321 ******** 2026-04-16 08:22:39.767181 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:22:39.767191 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:22:39.767201 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:22:39.767210 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767220 | orchestrator | 2026-04-16 08:22:39.767229 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:22:39.767239 | orchestrator | Thursday 16 April 2026 08:22:00 +0000 (0:00:01.100) 0:36:07.421 ******** 2026-04-16 08:22:39.767249 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767258 | orchestrator | 2026-04-16 08:22:39.767268 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 08:22:39.767278 | orchestrator | Thursday 16 April 2026 08:22:01 +0000 (0:00:01.086) 0:36:08.508 ******** 2026-04-16 08:22:39.767287 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767297 | orchestrator | 2026-04-16 08:22:39.767306 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:22:39.767316 | orchestrator | Thursday 16 April 2026 08:22:02 +0000 
(0:00:01.129) 0:36:09.638 ******** 2026-04-16 08:22:39.767325 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767336 | orchestrator | 2026-04-16 08:22:39.767346 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:22:39.767356 | orchestrator | Thursday 16 April 2026 08:22:04 +0000 (0:00:01.128) 0:36:10.766 ******** 2026-04-16 08:22:39.767365 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767377 | orchestrator | 2026-04-16 08:22:39.767388 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:22:39.767399 | orchestrator | Thursday 16 April 2026 08:22:05 +0000 (0:00:01.102) 0:36:11.869 ******** 2026-04-16 08:22:39.767409 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767420 | orchestrator | 2026-04-16 08:22:39.767431 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:22:39.767443 | orchestrator | Thursday 16 April 2026 08:22:06 +0000 (0:00:01.115) 0:36:12.985 ******** 2026-04-16 08:22:39.767453 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:22:39.767464 | orchestrator | 2026-04-16 08:22:39.767476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:22:39.767487 | orchestrator | Thursday 16 April 2026 08:22:08 +0000 (0:00:02.719) 0:36:15.705 ******** 2026-04-16 08:22:39.767498 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:22:39.767509 | orchestrator | 2026-04-16 08:22:39.767520 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:22:39.767531 | orchestrator | Thursday 16 April 2026 08:22:10 +0000 (0:00:01.108) 0:36:16.813 ******** 2026-04-16 08:22:39.767542 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-16 08:22:39.767553 | orchestrator | 2026-04-16 
08:22:39.767564 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:22:39.767575 | orchestrator | Thursday 16 April 2026 08:22:11 +0000 (0:00:01.084) 0:36:17.898 ******** 2026-04-16 08:22:39.767587 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767598 | orchestrator | 2026-04-16 08:22:39.767624 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:22:39.767635 | orchestrator | Thursday 16 April 2026 08:22:12 +0000 (0:00:01.098) 0:36:18.996 ******** 2026-04-16 08:22:39.767647 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767658 | orchestrator | 2026-04-16 08:22:39.767668 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:22:39.767688 | orchestrator | Thursday 16 April 2026 08:22:13 +0000 (0:00:01.139) 0:36:20.136 ******** 2026-04-16 08:22:39.767699 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767710 | orchestrator | 2026-04-16 08:22:39.767722 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:22:39.767732 | orchestrator | Thursday 16 April 2026 08:22:14 +0000 (0:00:01.123) 0:36:21.259 ******** 2026-04-16 08:22:39.767742 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767752 | orchestrator | 2026-04-16 08:22:39.767761 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 08:22:39.767771 | orchestrator | Thursday 16 April 2026 08:22:15 +0000 (0:00:01.111) 0:36:22.370 ******** 2026-04-16 08:22:39.767780 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767790 | orchestrator | 2026-04-16 08:22:39.767800 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:22:39.767809 | orchestrator | Thursday 16 April 2026 08:22:16 +0000 (0:00:01.131) 
0:36:23.502 ******** 2026-04-16 08:22:39.767819 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767829 | orchestrator | 2026-04-16 08:22:39.767855 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:22:39.767865 | orchestrator | Thursday 16 April 2026 08:22:17 +0000 (0:00:01.116) 0:36:24.619 ******** 2026-04-16 08:22:39.767875 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767884 | orchestrator | 2026-04-16 08:22:39.767894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:22:39.767903 | orchestrator | Thursday 16 April 2026 08:22:19 +0000 (0:00:01.177) 0:36:25.796 ******** 2026-04-16 08:22:39.767913 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.767923 | orchestrator | 2026-04-16 08:22:39.767932 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:22:39.767942 | orchestrator | Thursday 16 April 2026 08:22:20 +0000 (0:00:01.108) 0:36:26.905 ******** 2026-04-16 08:22:39.767952 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:22:39.767961 | orchestrator | 2026-04-16 08:22:39.767971 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:22:39.767981 | orchestrator | Thursday 16 April 2026 08:22:21 +0000 (0:00:01.143) 0:36:28.049 ******** 2026-04-16 08:22:39.767990 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-16 08:22:39.768000 | orchestrator | 2026-04-16 08:22:39.768010 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:22:39.768019 | orchestrator | Thursday 16 April 2026 08:22:22 +0000 (0:00:01.085) 0:36:29.135 ******** 2026-04-16 08:22:39.768029 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-16 08:22:39.768039 | orchestrator | ok: 
[testbed-node-3] => (item=/var/lib/ceph/) 2026-04-16 08:22:39.768065 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-16 08:22:39.768075 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-16 08:22:39.768084 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-16 08:22:39.768094 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-16 08:22:39.768103 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-16 08:22:39.768113 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:22:39.768123 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:22:39.768132 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:22:39.768142 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:22:39.768152 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:22:39.768161 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:22:39.768171 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:22:39.768181 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-16 08:22:39.768198 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-16 08:22:39.768208 | orchestrator | 2026-04-16 08:22:39.768217 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:22:39.768227 | orchestrator | Thursday 16 April 2026 08:22:29 +0000 (0:00:06.630) 0:36:35.766 ******** 2026-04-16 08:22:39.768236 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-16 08:22:39.768246 | orchestrator | 2026-04-16 08:22:39.768256 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] 
***************** 2026-04-16 08:22:39.768265 | orchestrator | Thursday 16 April 2026 08:22:30 +0000 (0:00:01.600) 0:36:37.367 ******** 2026-04-16 08:22:39.768275 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:22:39.768286 | orchestrator | 2026-04-16 08:22:39.768295 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:22:39.768305 | orchestrator | Thursday 16 April 2026 08:22:32 +0000 (0:00:01.498) 0:36:38.865 ******** 2026-04-16 08:22:39.768315 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:22:39.768325 | orchestrator | 2026-04-16 08:22:39.768334 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:22:39.768344 | orchestrator | Thursday 16 April 2026 08:22:34 +0000 (0:00:01.984) 0:36:40.850 ******** 2026-04-16 08:22:39.768353 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768363 | orchestrator | 2026-04-16 08:22:39.768378 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:22:39.768387 | orchestrator | Thursday 16 April 2026 08:22:35 +0000 (0:00:01.126) 0:36:41.976 ******** 2026-04-16 08:22:39.768397 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768407 | orchestrator | 2026-04-16 08:22:39.768416 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:22:39.768426 | orchestrator | Thursday 16 April 2026 08:22:36 +0000 (0:00:01.087) 0:36:43.064 ******** 2026-04-16 08:22:39.768436 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768445 | orchestrator | 2026-04-16 08:22:39.768455 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 
2026-04-16 08:22:39.768464 | orchestrator | Thursday 16 April 2026 08:22:37 +0000 (0:00:01.101) 0:36:44.166 ******** 2026-04-16 08:22:39.768474 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768484 | orchestrator | 2026-04-16 08:22:39.768493 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:22:39.768503 | orchestrator | Thursday 16 April 2026 08:22:38 +0000 (0:00:01.077) 0:36:45.244 ******** 2026-04-16 08:22:39.768526 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768536 | orchestrator | 2026-04-16 08:22:39.768556 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:22:39.768566 | orchestrator | Thursday 16 April 2026 08:22:39 +0000 (0:00:01.131) 0:36:46.376 ******** 2026-04-16 08:22:39.768576 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:22:39.768585 | orchestrator | 2026-04-16 08:22:39.768601 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:23:30.021787 | orchestrator | Thursday 16 April 2026 08:22:40 +0000 (0:00:01.140) 0:36:47.516 ******** 2026-04-16 08:23:30.021921 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.021945 | orchestrator | 2026-04-16 08:23:30.021963 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:23:30.021980 | orchestrator | Thursday 16 April 2026 08:22:41 +0000 (0:00:01.112) 0:36:48.628 ******** 2026-04-16 08:23:30.021994 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022008 | orchestrator | 2026-04-16 08:23:30.022130 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:23:30.022178 | orchestrator | Thursday 16 April 2026 08:22:42 +0000 (0:00:01.102) 0:36:49.731 ******** 
2026-04-16 08:23:30.022194 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022208 | orchestrator | 2026-04-16 08:23:30.022222 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:23:30.022236 | orchestrator | Thursday 16 April 2026 08:22:44 +0000 (0:00:01.108) 0:36:50.840 ******** 2026-04-16 08:23:30.022250 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022264 | orchestrator | 2026-04-16 08:23:30.022278 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:23:30.022292 | orchestrator | Thursday 16 April 2026 08:22:45 +0000 (0:00:01.106) 0:36:51.946 ******** 2026-04-16 08:23:30.022307 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.022323 | orchestrator | 2026-04-16 08:23:30.022339 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:23:30.022355 | orchestrator | Thursday 16 April 2026 08:22:46 +0000 (0:00:01.181) 0:36:53.127 ******** 2026-04-16 08:23:30.022371 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:23:30.022385 | orchestrator | 2026-04-16 08:23:30.022401 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:23:30.022415 | orchestrator | Thursday 16 April 2026 08:22:50 +0000 (0:00:04.373) 0:36:57.501 ******** 2026-04-16 08:23:30.022430 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:23:30.022446 | orchestrator | 2026-04-16 08:23:30.022460 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:23:30.022475 | orchestrator | Thursday 16 April 2026 08:22:51 +0000 (0:00:01.148) 0:36:58.649 ******** 2026-04-16 08:23:30.022493 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-16 08:23:30.022512 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-16 08:23:30.022528 | orchestrator | 2026-04-16 08:23:30.022542 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:23:30.022556 | orchestrator | Thursday 16 April 2026 08:22:59 +0000 (0:00:08.090) 0:37:06.739 ******** 2026-04-16 08:23:30.022571 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022586 | orchestrator | 2026-04-16 08:23:30.022600 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:23:30.022615 | orchestrator | Thursday 16 April 2026 08:23:01 +0000 (0:00:01.138) 0:37:07.878 ******** 2026-04-16 08:23:30.022630 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022645 | orchestrator | 2026-04-16 08:23:30.022661 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:23:30.022675 | orchestrator | Thursday 16 April 2026 08:23:02 +0000 (0:00:01.117) 0:37:08.995 ******** 2026-04-16 08:23:30.022708 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022724 | orchestrator | 2026-04-16 08:23:30.022739 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 
08:23:30.022754 | orchestrator | Thursday 16 April 2026 08:23:03 +0000 (0:00:01.115) 0:37:10.111 ******** 2026-04-16 08:23:30.022768 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022782 | orchestrator | 2026-04-16 08:23:30.022796 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:23:30.022811 | orchestrator | Thursday 16 April 2026 08:23:04 +0000 (0:00:01.134) 0:37:11.245 ******** 2026-04-16 08:23:30.022842 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022859 | orchestrator | 2026-04-16 08:23:30.022874 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:23:30.022889 | orchestrator | Thursday 16 April 2026 08:23:05 +0000 (0:00:01.165) 0:37:12.411 ******** 2026-04-16 08:23:30.022904 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.022917 | orchestrator | 2026-04-16 08:23:30.022925 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:23:30.022934 | orchestrator | Thursday 16 April 2026 08:23:06 +0000 (0:00:01.247) 0:37:13.659 ******** 2026-04-16 08:23:30.022943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:23:30.022952 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:23:30.022961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:23:30.022970 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.022978 | orchestrator | 2026-04-16 08:23:30.022987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:23:30.023019 | orchestrator | Thursday 16 April 2026 08:23:08 +0000 (0:00:01.368) 0:37:15.028 ******** 2026-04-16 08:23:30.023028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:23:30.023037 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-04-16 08:23:30.023074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:23:30.023089 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.023114 | orchestrator | 2026-04-16 08:23:30.023129 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:23:30.023143 | orchestrator | Thursday 16 April 2026 08:23:09 +0000 (0:00:01.699) 0:37:16.728 ******** 2026-04-16 08:23:30.023156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:23:30.023169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:23:30.023183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:23:30.023197 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.023210 | orchestrator | 2026-04-16 08:23:30.023225 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:23:30.023240 | orchestrator | Thursday 16 April 2026 08:23:11 +0000 (0:00:01.763) 0:37:18.492 ******** 2026-04-16 08:23:30.023254 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.023268 | orchestrator | 2026-04-16 08:23:30.023283 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:23:30.023297 | orchestrator | Thursday 16 April 2026 08:23:12 +0000 (0:00:01.160) 0:37:19.652 ******** 2026-04-16 08:23:30.023311 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 08:23:30.023325 | orchestrator | 2026-04-16 08:23:30.023341 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:23:30.023355 | orchestrator | Thursday 16 April 2026 08:23:14 +0000 (0:00:01.340) 0:37:20.992 ******** 2026-04-16 08:23:30.023370 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.023382 | orchestrator | 2026-04-16 08:23:30.023391 | orchestrator | TASK 
[ceph-osd : Set_fact add_osd] ********************************************* 2026-04-16 08:23:30.023400 | orchestrator | Thursday 16 April 2026 08:23:15 +0000 (0:00:01.754) 0:37:22.747 ******** 2026-04-16 08:23:30.023408 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.023417 | orchestrator | 2026-04-16 08:23:30.023425 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-16 08:23:30.023434 | orchestrator | Thursday 16 April 2026 08:23:17 +0000 (0:00:01.119) 0:37:23.866 ******** 2026-04-16 08:23:30.023443 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:23:30.023452 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:23:30.023461 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:23:30.023481 | orchestrator | 2026-04-16 08:23:30.023490 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-16 08:23:30.023499 | orchestrator | Thursday 16 April 2026 08:23:18 +0000 (0:00:01.625) 0:37:25.492 ******** 2026-04-16 08:23:30.023507 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3 2026-04-16 08:23:30.023516 | orchestrator | 2026-04-16 08:23:30.023525 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-16 08:23:30.023534 | orchestrator | Thursday 16 April 2026 08:23:20 +0000 (0:00:01.439) 0:37:26.932 ******** 2026-04-16 08:23:30.023542 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.023551 | orchestrator | 2026-04-16 08:23:30.023559 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-16 08:23:30.023568 | orchestrator | Thursday 16 April 2026 08:23:21 +0000 (0:00:01.126) 0:37:28.059 ******** 2026-04-16 08:23:30.023577 | 
orchestrator | skipping: [testbed-node-3] 2026-04-16 08:23:30.023585 | orchestrator | 2026-04-16 08:23:30.023594 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-16 08:23:30.023602 | orchestrator | Thursday 16 April 2026 08:23:22 +0000 (0:00:01.132) 0:37:29.191 ******** 2026-04-16 08:23:30.023611 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.023620 | orchestrator | 2026-04-16 08:23:30.023628 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-16 08:23:30.023637 | orchestrator | Thursday 16 April 2026 08:23:23 +0000 (0:00:01.437) 0:37:30.629 ******** 2026-04-16 08:23:30.023645 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:23:30.023654 | orchestrator | 2026-04-16 08:23:30.023671 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-16 08:23:30.023679 | orchestrator | Thursday 16 April 2026 08:23:25 +0000 (0:00:01.156) 0:37:31.785 ******** 2026-04-16 08:23:30.023688 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-16 08:23:30.023697 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-16 08:23:30.023708 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-16 08:23:30.023725 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-16 08:23:30.023747 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-16 08:23:30.023761 | orchestrator | 2026-04-16 08:23:30.023775 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-16 08:23:30.023788 | orchestrator | Thursday 16 April 2026 08:23:28 +0000 (0:00:03.649) 0:37:35.435 ******** 2026-04-16 08:23:30.023803 | orchestrator | skipping: [testbed-node-3] 
2026-04-16 08:23:30.023818 | orchestrator | 2026-04-16 08:23:30.023832 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-16 08:23:30.023846 | orchestrator | Thursday 16 April 2026 08:23:29 +0000 (0:00:01.124) 0:37:36.560 ******** 2026-04-16 08:23:30.023860 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-04-16 08:23:30.023874 | orchestrator | 2026-04-16 08:23:30.023888 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-16 08:24:35.951506 | orchestrator | Thursday 16 April 2026 08:23:31 +0000 (0:00:01.469) 0:37:38.030 ******** 2026-04-16 08:24:35.951650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-16 08:24:35.951676 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-04-16 08:24:35.951696 | orchestrator | 2026-04-16 08:24:35.951716 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-16 08:24:35.951735 | orchestrator | Thursday 16 April 2026 08:23:33 +0000 (0:00:01.783) 0:37:39.814 ******** 2026-04-16 08:24:35.951754 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:24:35.951773 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:24:35.951790 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:24:35.951842 | orchestrator | 2026-04-16 08:24:35.951861 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:24:35.951880 | orchestrator | Thursday 16 April 2026 08:23:36 +0000 (0:00:03.287) 0:37:43.101 ******** 2026-04-16 08:24:35.951898 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 08:24:35.951916 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:24:35.951933 | orchestrator | ok: [testbed-node-3] 
2026-04-16 08:24:35.951951 | orchestrator | 2026-04-16 08:24:35.951969 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-16 08:24:35.951987 | orchestrator | Thursday 16 April 2026 08:23:38 +0000 (0:00:01.985) 0:37:45.087 ******** 2026-04-16 08:24:35.952006 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.952029 | orchestrator | 2026-04-16 08:24:35.952085 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-16 08:24:35.952109 | orchestrator | Thursday 16 April 2026 08:23:39 +0000 (0:00:01.229) 0:37:46.316 ******** 2026-04-16 08:24:35.952131 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.952153 | orchestrator | 2026-04-16 08:24:35.952173 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-16 08:24:35.952194 | orchestrator | Thursday 16 April 2026 08:23:40 +0000 (0:00:01.146) 0:37:47.463 ******** 2026-04-16 08:24:35.952216 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.952237 | orchestrator | 2026-04-16 08:24:35.952259 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-16 08:24:35.952280 | orchestrator | Thursday 16 April 2026 08:23:41 +0000 (0:00:01.111) 0:37:48.575 ******** 2026-04-16 08:24:35.952300 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-04-16 08:24:35.952321 | orchestrator | 2026-04-16 08:24:35.952340 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-16 08:24:35.952360 | orchestrator | Thursday 16 April 2026 08:23:43 +0000 (0:00:01.455) 0:37:50.030 ******** 2026-04-16 08:24:35.952378 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:24:35.952395 | orchestrator | 2026-04-16 08:24:35.952413 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-04-16 08:24:35.952430 | orchestrator | Thursday 16 April 2026 08:23:44 +0000 (0:00:01.520) 0:37:51.551 ******** 2026-04-16 08:24:35.952448 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:24:35.952465 | orchestrator | 2026-04-16 08:24:35.952482 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-16 08:24:35.952500 | orchestrator | Thursday 16 April 2026 08:23:48 +0000 (0:00:03.971) 0:37:55.523 ******** 2026-04-16 08:24:35.952518 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-04-16 08:24:35.952536 | orchestrator | 2026-04-16 08:24:35.952554 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-16 08:24:35.952571 | orchestrator | Thursday 16 April 2026 08:23:50 +0000 (0:00:01.471) 0:37:56.994 ******** 2026-04-16 08:24:35.952589 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:24:35.952606 | orchestrator | 2026-04-16 08:24:35.952625 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-16 08:24:35.952645 | orchestrator | Thursday 16 April 2026 08:23:52 +0000 (0:00:01.961) 0:37:58.955 ******** 2026-04-16 08:24:35.952663 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:24:35.952680 | orchestrator | 2026-04-16 08:24:35.952697 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-16 08:24:35.952713 | orchestrator | Thursday 16 April 2026 08:23:54 +0000 (0:00:01.960) 0:38:00.916 ******** 2026-04-16 08:24:35.952730 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:24:35.952748 | orchestrator | 2026-04-16 08:24:35.952789 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-16 08:24:35.952802 | orchestrator | Thursday 16 April 2026 08:23:56 +0000 (0:00:02.267) 0:38:03.184 ******** 2026-04-16 08:24:35.952813 | orchestrator | skipping: [testbed-node-3] 
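The "Get osd ids" / "Collect osd ids" steps above gather the numeric OSD ids on the node (0 and 3 here, matching the per-item results that follow) from data directories named `<cluster>-<id>`. A minimal sketch of that id extraction, using a throwaway `/tmp/osd-demo` directory with hypothetical entries in place of the real `/var/lib/ceph/osd`:

```shell
# Sketch: derive OSD ids from data directory names of the form
# <cluster>-<id> (here ceph-0 and ceph-3, standing in for the real
# entries under /var/lib/ceph/osd). This mirrors the idea of the
# tasks above, not their exact implementation.
mkdir -p /tmp/osd-demo/ceph-0 /tmp/osd-demo/ceph-3
for d in /tmp/osd-demo/ceph-*; do
  basename "$d" | sed 's/.*-//'   # strip the "<cluster>-" prefix, keep the id
done
```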
2026-04-16 08:24:35.952824 | orchestrator | 2026-04-16 08:24:35.952834 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-16 08:24:35.952859 | orchestrator | Thursday 16 April 2026 08:23:57 +0000 (0:00:01.132) 0:38:04.316 ******** 2026-04-16 08:24:35.952870 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.952881 | orchestrator | 2026-04-16 08:24:35.952892 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-16 08:24:35.952902 | orchestrator | Thursday 16 April 2026 08:23:58 +0000 (0:00:01.174) 0:38:05.491 ******** 2026-04-16 08:24:35.952913 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 08:24:35.952924 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-16 08:24:35.952934 | orchestrator | 2026-04-16 08:24:35.952945 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-16 08:24:35.952956 | orchestrator | Thursday 16 April 2026 08:24:00 +0000 (0:00:01.948) 0:38:07.439 ******** 2026-04-16 08:24:35.952966 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 08:24:35.952977 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-04-16 08:24:35.952988 | orchestrator | 2026-04-16 08:24:35.952999 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-16 08:24:35.953009 | orchestrator | Thursday 16 April 2026 08:24:03 +0000 (0:00:02.922) 0:38:10.362 ******** 2026-04-16 08:24:35.953020 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-04-16 08:24:35.953088 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-04-16 08:24:35.953101 | orchestrator | 2026-04-16 08:24:35.953112 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-16 08:24:35.953123 | orchestrator | Thursday 16 April 2026 08:24:08 +0000 (0:00:04.674) 0:38:15.036 ******** 2026-04-16 08:24:35.953133 | orchestrator 
| skipping: [testbed-node-3] 2026-04-16 08:24:35.953144 | orchestrator | 2026-04-16 08:24:35.953156 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-16 08:24:35.953167 | orchestrator | Thursday 16 April 2026 08:24:09 +0000 (0:00:01.198) 0:38:16.235 ******** 2026-04-16 08:24:35.953177 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.953188 | orchestrator | 2026-04-16 08:24:35.953199 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-16 08:24:35.953210 | orchestrator | Thursday 16 April 2026 08:24:10 +0000 (0:00:01.202) 0:38:17.437 ******** 2026-04-16 08:24:35.953220 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.953231 | orchestrator | 2026-04-16 08:24:35.953242 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-16 08:24:35.953252 | orchestrator | Thursday 16 April 2026 08:24:12 +0000 (0:00:01.344) 0:38:18.782 ******** 2026-04-16 08:24:35.953263 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.953274 | orchestrator | 2026-04-16 08:24:35.953284 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-16 08:24:35.953296 | orchestrator | Thursday 16 April 2026 08:24:13 +0000 (0:00:01.070) 0:38:19.853 ******** 2026-04-16 08:24:35.953306 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:24:35.953317 | orchestrator | 2026-04-16 08:24:35.953327 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-16 08:24:35.953338 | orchestrator | Thursday 16 April 2026 08:24:14 +0000 (0:00:01.074) 0:38:20.928 ******** 2026-04-16 08:24:35.953349 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-16 08:24:35.953362 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-16 08:24:35.953373 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:24:35.953384 | orchestrator | 2026-04-16 08:24:35.953394 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-16 08:24:35.953405 | orchestrator | 2026-04-16 08:24:35.953416 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:24:35.953427 | orchestrator | Thursday 16 April 2026 08:24:22 +0000 (0:00:08.013) 0:38:28.941 ******** 2026-04-16 08:24:35.953438 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-16 08:24:35.953456 | orchestrator | 2026-04-16 08:24:35.953467 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:24:35.953478 | orchestrator | Thursday 16 April 2026 08:24:23 +0000 (0:00:01.072) 0:38:30.013 ******** 2026-04-16 08:24:35.953488 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953499 | orchestrator | 2026-04-16 08:24:35.953510 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:24:35.953521 | orchestrator | Thursday 16 April 2026 08:24:24 +0000 (0:00:01.481) 0:38:31.495 ******** 2026-04-16 08:24:35.953531 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953542 | orchestrator | 2026-04-16 08:24:35.953552 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:24:35.953563 | orchestrator | Thursday 16 April 2026 08:24:25 +0000 (0:00:01.139) 0:38:32.634 ******** 2026-04-16 08:24:35.953574 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953585 | orchestrator | 2026-04-16 08:24:35.953595 | orchestrator | TASK [ceph-facts : Set_fact container_binary] 
********************************** 2026-04-16 08:24:35.953606 | orchestrator | Thursday 16 April 2026 08:24:27 +0000 (0:00:01.445) 0:38:34.080 ******** 2026-04-16 08:24:35.953617 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953628 | orchestrator | 2026-04-16 08:24:35.953638 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:24:35.953649 | orchestrator | Thursday 16 April 2026 08:24:28 +0000 (0:00:01.193) 0:38:35.274 ******** 2026-04-16 08:24:35.953660 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953670 | orchestrator | 2026-04-16 08:24:35.953681 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:24:35.953692 | orchestrator | Thursday 16 April 2026 08:24:29 +0000 (0:00:01.114) 0:38:36.388 ******** 2026-04-16 08:24:35.953708 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953720 | orchestrator | 2026-04-16 08:24:35.953731 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:24:35.953741 | orchestrator | Thursday 16 April 2026 08:24:30 +0000 (0:00:01.143) 0:38:37.531 ******** 2026-04-16 08:24:35.953752 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:35.953763 | orchestrator | 2026-04-16 08:24:35.953774 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:24:35.953785 | orchestrator | Thursday 16 April 2026 08:24:31 +0000 (0:00:01.117) 0:38:38.648 ******** 2026-04-16 08:24:35.953796 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953806 | orchestrator | 2026-04-16 08:24:35.953817 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:24:35.953828 | orchestrator | Thursday 16 April 2026 08:24:33 +0000 (0:00:01.149) 0:38:39.798 ******** 2026-04-16 08:24:35.953838 | orchestrator | ok: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:24:35.953849 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:24:35.953860 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:24:35.953870 | orchestrator | 2026-04-16 08:24:35.953881 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:24:35.953892 | orchestrator | Thursday 16 April 2026 08:24:34 +0000 (0:00:01.658) 0:38:41.456 ******** 2026-04-16 08:24:35.953903 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:35.953913 | orchestrator | 2026-04-16 08:24:35.953924 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:24:35.953941 | orchestrator | Thursday 16 April 2026 08:24:35 +0000 (0:00:01.238) 0:38:42.695 ******** 2026-04-16 08:24:59.284776 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:24:59.284891 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:24:59.284907 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:24:59.284945 | orchestrator | 2026-04-16 08:24:59.284958 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:24:59.284970 | orchestrator | Thursday 16 April 2026 08:24:38 +0000 (0:00:02.813) 0:38:45.508 ******** 2026-04-16 08:24:59.284982 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-16 08:24:59.284994 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-16 08:24:59.285021 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-16 08:24:59.285043 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285084 | orchestrator | 
2026-04-16 08:24:59.285097 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:24:59.285109 | orchestrator | Thursday 16 April 2026 08:24:40 +0000 (0:00:01.413) 0:38:46.922 ******** 2026-04-16 08:24:59.285122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285158 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285169 | orchestrator | 2026-04-16 08:24:59.285180 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:24:59.285191 | orchestrator | Thursday 16 April 2026 08:24:41 +0000 (0:00:01.600) 0:38:48.522 ******** 2026-04-16 08:24:59.285204 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285218 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:24:59.285255 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285266 | orchestrator | 2026-04-16 08:24:59.285277 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:24:59.285288 | orchestrator | Thursday 16 April 2026 08:24:42 +0000 (0:00:01.133) 0:38:49.655 ******** 2026-04-16 08:24:59.285303 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:24:36.468546', 'end': '2026-04-16 08:24:36.520543', 'delta': '0:00:00.051997', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:24:59.285347 | orchestrator | ok: 
[testbed-node-4] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:24:37.014214', 'end': '2026-04-16 08:24:37.074727', 'delta': '0:00:00.060513', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:24:59.285362 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:24:37.583576', 'end': '2026-04-16 08:24:37.614115', 'delta': '0:00:00.030539', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:24:59.285375 | orchestrator | 2026-04-16 08:24:59.285387 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:24:59.285400 | orchestrator | Thursday 16 April 2026 08:24:44 +0000 (0:00:01.183) 0:38:50.839 ******** 2026-04-16 08:24:59.285412 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:59.285425 | orchestrator | 2026-04-16 08:24:59.285437 | orchestrator | TASK [ceph-facts : Get 
current fsid if cluster is already running] ************* 2026-04-16 08:24:59.285450 | orchestrator | Thursday 16 April 2026 08:24:45 +0000 (0:00:01.250) 0:38:52.089 ******** 2026-04-16 08:24:59.285462 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285474 | orchestrator | 2026-04-16 08:24:59.285488 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:24:59.285501 | orchestrator | Thursday 16 April 2026 08:24:46 +0000 (0:00:01.247) 0:38:53.336 ******** 2026-04-16 08:24:59.285513 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:59.285526 | orchestrator | 2026-04-16 08:24:59.285539 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:24:59.285551 | orchestrator | Thursday 16 April 2026 08:24:47 +0000 (0:00:01.117) 0:38:54.453 ******** 2026-04-16 08:24:59.285564 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:24:59.285577 | orchestrator | 2026-04-16 08:24:59.285589 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:24:59.285602 | orchestrator | Thursday 16 April 2026 08:24:50 +0000 (0:00:02.316) 0:38:56.770 ******** 2026-04-16 08:24:59.285614 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:59.285626 | orchestrator | 2026-04-16 08:24:59.285639 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:24:59.285652 | orchestrator | Thursday 16 April 2026 08:24:51 +0000 (0:00:01.162) 0:38:57.933 ******** 2026-04-16 08:24:59.285664 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285676 | orchestrator | 2026-04-16 08:24:59.285687 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:24:59.285698 | orchestrator | Thursday 16 April 2026 08:24:52 +0000 (0:00:01.113) 0:38:59.046 ******** 2026-04-16 
08:24:59.285719 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285731 | orchestrator | 2026-04-16 08:24:59.285746 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:24:59.285758 | orchestrator | Thursday 16 April 2026 08:24:53 +0000 (0:00:01.225) 0:39:00.272 ******** 2026-04-16 08:24:59.285769 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285780 | orchestrator | 2026-04-16 08:24:59.285790 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:24:59.285801 | orchestrator | Thursday 16 April 2026 08:24:54 +0000 (0:00:01.117) 0:39:01.390 ******** 2026-04-16 08:24:59.285812 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285823 | orchestrator | 2026-04-16 08:24:59.285834 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:24:59.285845 | orchestrator | Thursday 16 April 2026 08:24:55 +0000 (0:00:01.132) 0:39:02.522 ******** 2026-04-16 08:24:59.285856 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:59.285866 | orchestrator | 2026-04-16 08:24:59.285877 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:24:59.285888 | orchestrator | Thursday 16 April 2026 08:24:56 +0000 (0:00:01.129) 0:39:03.652 ******** 2026-04-16 08:24:59.285899 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285910 | orchestrator | 2026-04-16 08:24:59.285920 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:24:59.285931 | orchestrator | Thursday 16 April 2026 08:24:58 +0000 (0:00:01.103) 0:39:04.755 ******** 2026-04-16 08:24:59.285942 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:24:59.285953 | orchestrator | 2026-04-16 08:24:59.285964 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] 
*********************** 2026-04-16 08:24:59.285975 | orchestrator | Thursday 16 April 2026 08:24:59 +0000 (0:00:01.140) 0:39:05.895 ******** 2026-04-16 08:24:59.285985 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:24:59.285996 | orchestrator | 2026-04-16 08:24:59.286014 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:25:01.628865 | orchestrator | Thursday 16 April 2026 08:25:00 +0000 (0:00:01.127) 0:39:07.023 ******** 2026-04-16 08:25:01.628946 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:01.628954 | orchestrator | 2026-04-16 08:25:01.628959 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:25:01.628964 | orchestrator | Thursday 16 April 2026 08:25:01 +0000 (0:00:01.125) 0:39:08.149 ******** 2026-04-16 08:25:01.628971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.628980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 
'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}})  2026-04-16 08:25:01.628989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:25:01.629011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}})  2026-04-16 08:25:01.629028 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629093 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:25:01.629100 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629119 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}})  2026-04-16 08:25:01.629127 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}})  2026-04-16 08:25:01.629132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:01.629145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:25:02.894606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:02.894697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:25:02.894725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:25:02.894737 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:02.894747 | orchestrator | 2026-04-16 08:25:02.894756 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:25:02.894766 | orchestrator | Thursday 16 April 2026 08:25:02 +0000 (0:00:01.314) 0:39:09.464 ******** 2026-04-16 08:25:02.894775 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894860 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894888 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:02.894902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235157 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235306 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235365 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235405 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:25:08.235451 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:08.235464 | orchestrator | 2026-04-16 08:25:08.235482 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:25:08.235504 | orchestrator | Thursday 16 April 2026 08:25:04 +0000 (0:00:01.372) 0:39:10.837 ******** 2026-04-16 08:25:08.235523 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:08.235543 | orchestrator | 2026-04-16 08:25:08.235563 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:25:08.235583 | orchestrator | Thursday 16 April 2026 08:25:05 +0000 (0:00:01.527) 0:39:12.364 ******** 2026-04-16 08:25:08.235603 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:08.235624 | orchestrator | 2026-04-16 08:25:08.235646 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:25:08.235667 | orchestrator | Thursday 16 April 2026 08:25:06 +0000 (0:00:01.122) 0:39:13.487 ******** 2026-04-16 08:25:08.235687 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:08.235707 | orchestrator | 2026-04-16 08:25:08.235727 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:25:08.235757 | orchestrator | Thursday 16 April 2026 08:25:08 +0000 (0:00:01.498) 0:39:14.985 ******** 2026-04-16 08:25:48.191160 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.191309 | orchestrator | 2026-04-16 08:25:48.191994 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:25:48.192027 | orchestrator | Thursday 16 April 2026 08:25:09 +0000 (0:00:01.159) 0:39:16.145 ******** 2026-04-16 08:25:48.192049 | orchestrator | skipping: [testbed-node-4] 2026-04-16 
08:25:48.192094 | orchestrator | 2026-04-16 08:25:48.192107 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:25:48.192119 | orchestrator | Thursday 16 April 2026 08:25:10 +0000 (0:00:01.233) 0:39:17.378 ******** 2026-04-16 08:25:48.192129 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192140 | orchestrator | 2026-04-16 08:25:48.192151 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:25:48.192162 | orchestrator | Thursday 16 April 2026 08:25:11 +0000 (0:00:01.114) 0:39:18.493 ******** 2026-04-16 08:25:48.192175 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-16 08:25:48.192186 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-16 08:25:48.192197 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-16 08:25:48.192208 | orchestrator | 2026-04-16 08:25:48.192218 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:25:48.192230 | orchestrator | Thursday 16 April 2026 08:25:13 +0000 (0:00:01.654) 0:39:20.147 ******** 2026-04-16 08:25:48.192241 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-16 08:25:48.192254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-16 08:25:48.192273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-16 08:25:48.192313 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192334 | orchestrator | 2026-04-16 08:25:48.192346 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:25:48.192357 | orchestrator | Thursday 16 April 2026 08:25:14 +0000 (0:00:01.132) 0:39:21.280 ******** 2026-04-16 08:25:48.192368 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-16 08:25:48.192380 | 
orchestrator | 2026-04-16 08:25:48.192393 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:25:48.192413 | orchestrator | Thursday 16 April 2026 08:25:15 +0000 (0:00:01.093) 0:39:22.374 ******** 2026-04-16 08:25:48.192432 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192451 | orchestrator | 2026-04-16 08:25:48.192470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:25:48.192514 | orchestrator | Thursday 16 April 2026 08:25:16 +0000 (0:00:01.112) 0:39:23.486 ******** 2026-04-16 08:25:48.192527 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192537 | orchestrator | 2026-04-16 08:25:48.192548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:25:48.192559 | orchestrator | Thursday 16 April 2026 08:25:17 +0000 (0:00:01.146) 0:39:24.633 ******** 2026-04-16 08:25:48.192570 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192581 | orchestrator | 2026-04-16 08:25:48.192592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:25:48.192603 | orchestrator | Thursday 16 April 2026 08:25:19 +0000 (0:00:01.186) 0:39:25.820 ******** 2026-04-16 08:25:48.192614 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.192624 | orchestrator | 2026-04-16 08:25:48.192635 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:25:48.192646 | orchestrator | Thursday 16 April 2026 08:25:20 +0000 (0:00:01.212) 0:39:27.032 ******** 2026-04-16 08:25:48.192657 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:25:48.192667 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:25:48.192678 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-16 08:25:48.192688 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192699 | orchestrator | 2026-04-16 08:25:48.192710 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:25:48.192721 | orchestrator | Thursday 16 April 2026 08:25:21 +0000 (0:00:01.401) 0:39:28.434 ******** 2026-04-16 08:25:48.192731 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:25:48.192742 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:25:48.192752 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:25:48.192763 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192774 | orchestrator | 2026-04-16 08:25:48.192784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:25:48.192795 | orchestrator | Thursday 16 April 2026 08:25:23 +0000 (0:00:01.413) 0:39:29.848 ******** 2026-04-16 08:25:48.192806 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:25:48.192816 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:25:48.192827 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:25:48.192837 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.192848 | orchestrator | 2026-04-16 08:25:48.192859 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:25:48.192869 | orchestrator | Thursday 16 April 2026 08:25:24 +0000 (0:00:01.352) 0:39:31.200 ******** 2026-04-16 08:25:48.192880 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.192891 | orchestrator | 2026-04-16 08:25:48.192901 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:25:48.192912 | orchestrator | Thursday 16 April 2026 08:25:25 +0000 
(0:00:01.210) 0:39:32.410 ******** 2026-04-16 08:25:48.192922 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 08:25:48.192933 | orchestrator | 2026-04-16 08:25:48.192944 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:25:48.192955 | orchestrator | Thursday 16 April 2026 08:25:26 +0000 (0:00:01.316) 0:39:33.727 ******** 2026-04-16 08:25:48.192987 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:25:48.192999 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:25:48.193010 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:25:48.193020 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:25:48.193031 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-16 08:25:48.193042 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:25:48.193128 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:25:48.193142 | orchestrator | 2026-04-16 08:25:48.193153 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:25:48.193164 | orchestrator | Thursday 16 April 2026 08:25:28 +0000 (0:00:01.773) 0:39:35.500 ******** 2026-04-16 08:25:48.193174 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:25:48.193185 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:25:48.193196 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:25:48.193207 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-16 08:25:48.193218 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-16 08:25:48.193235 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:25:48.193247 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:25:48.193258 | orchestrator | 2026-04-16 08:25:48.193268 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-16 08:25:48.193279 | orchestrator | Thursday 16 April 2026 08:25:30 +0000 (0:00:02.193) 0:39:37.694 ******** 2026-04-16 08:25:48.193290 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193301 | orchestrator | 2026-04-16 08:25:48.193311 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-16 08:25:48.193322 | orchestrator | Thursday 16 April 2026 08:25:32 +0000 (0:00:01.197) 0:39:38.892 ******** 2026-04-16 08:25:48.193333 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193344 | orchestrator | 2026-04-16 08:25:48.193354 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-16 08:25:48.193365 | orchestrator | Thursday 16 April 2026 08:25:32 +0000 (0:00:00.791) 0:39:39.683 ******** 2026-04-16 08:25:48.193376 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193386 | orchestrator | 2026-04-16 08:25:48.193397 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-16 08:25:48.193408 | orchestrator | Thursday 16 April 2026 08:25:33 +0000 (0:00:00.859) 0:39:40.542 ******** 2026-04-16 08:25:48.193419 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-16 08:25:48.193430 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-16 08:25:48.193440 | orchestrator | 2026-04-16 08:25:48.193451 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-16 08:25:48.193462 | orchestrator | Thursday 16 April 2026 08:25:37 +0000 (0:00:03.931) 0:39:44.474 ******** 2026-04-16 08:25:48.193473 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-16 08:25:48.193484 | orchestrator | 2026-04-16 08:25:48.193495 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:25:48.193506 | orchestrator | Thursday 16 April 2026 08:25:38 +0000 (0:00:01.242) 0:39:45.716 ******** 2026-04-16 08:25:48.193516 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-16 08:25:48.193527 | orchestrator | 2026-04-16 08:25:48.193538 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:25:48.193549 | orchestrator | Thursday 16 April 2026 08:25:40 +0000 (0:00:01.126) 0:39:46.843 ******** 2026-04-16 08:25:48.193560 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.193571 | orchestrator | 2026-04-16 08:25:48.193581 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:25:48.193592 | orchestrator | Thursday 16 April 2026 08:25:41 +0000 (0:00:01.125) 0:39:47.968 ******** 2026-04-16 08:25:48.193603 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193614 | orchestrator | 2026-04-16 08:25:48.193625 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:25:48.193643 | orchestrator | Thursday 16 April 2026 08:25:42 +0000 (0:00:01.494) 0:39:49.463 ******** 2026-04-16 08:25:48.193654 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193665 | orchestrator | 2026-04-16 08:25:48.193676 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 08:25:48.193687 | orchestrator | 
Thursday 16 April 2026 08:25:44 +0000 (0:00:01.512) 0:39:50.975 ******** 2026-04-16 08:25:48.193698 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:25:48.193708 | orchestrator | 2026-04-16 08:25:48.193719 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 08:25:48.193730 | orchestrator | Thursday 16 April 2026 08:25:45 +0000 (0:00:01.574) 0:39:52.550 ******** 2026-04-16 08:25:48.193741 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.193752 | orchestrator | 2026-04-16 08:25:48.193763 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 08:25:48.193774 | orchestrator | Thursday 16 April 2026 08:25:46 +0000 (0:00:01.105) 0:39:53.656 ******** 2026-04-16 08:25:48.193785 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.193795 | orchestrator | 2026-04-16 08:25:48.193806 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 08:25:48.193817 | orchestrator | Thursday 16 April 2026 08:25:48 +0000 (0:00:01.157) 0:39:54.814 ******** 2026-04-16 08:25:48.193828 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:25:48.193839 | orchestrator | 2026-04-16 08:25:48.193857 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 08:26:27.346890 | orchestrator | Thursday 16 April 2026 08:25:49 +0000 (0:00:01.117) 0:39:55.931 ******** 2026-04-16 08:26:27.347045 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347145 | orchestrator | 2026-04-16 08:26:27.347161 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:26:27.347174 | orchestrator | Thursday 16 April 2026 08:25:50 +0000 (0:00:01.571) 0:39:57.503 ******** 2026-04-16 08:26:27.347185 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347196 | orchestrator | 2026-04-16 08:26:27.347208 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:26:27.347219 | orchestrator | Thursday 16 April 2026 08:25:52 +0000 (0:00:01.573) 0:39:59.076 ******** 2026-04-16 08:26:27.347231 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347242 | orchestrator | 2026-04-16 08:26:27.347253 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:26:27.347264 | orchestrator | Thursday 16 April 2026 08:25:53 +0000 (0:00:00.747) 0:39:59.823 ******** 2026-04-16 08:26:27.347275 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347286 | orchestrator | 2026-04-16 08:26:27.347297 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:26:27.347308 | orchestrator | Thursday 16 April 2026 08:25:53 +0000 (0:00:00.766) 0:40:00.590 ******** 2026-04-16 08:26:27.347319 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347330 | orchestrator | 2026-04-16 08:26:27.347341 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:26:27.347351 | orchestrator | Thursday 16 April 2026 08:25:54 +0000 (0:00:00.785) 0:40:01.376 ******** 2026-04-16 08:26:27.347362 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347373 | orchestrator | 2026-04-16 08:26:27.347401 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:26:27.347412 | orchestrator | Thursday 16 April 2026 08:25:55 +0000 (0:00:00.789) 0:40:02.165 ******** 2026-04-16 08:26:27.347424 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347434 | orchestrator | 2026-04-16 08:26:27.347446 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:26:27.347457 | orchestrator | Thursday 16 April 2026 08:25:56 +0000 (0:00:00.768) 0:40:02.934 ******** 2026-04-16 08:26:27.347468 | 
orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347479 | orchestrator | 2026-04-16 08:26:27.347490 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:26:27.347525 | orchestrator | Thursday 16 April 2026 08:25:56 +0000 (0:00:00.785) 0:40:03.720 ******** 2026-04-16 08:26:27.347537 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347547 | orchestrator | 2026-04-16 08:26:27.347558 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:26:27.347569 | orchestrator | Thursday 16 April 2026 08:25:57 +0000 (0:00:00.784) 0:40:04.505 ******** 2026-04-16 08:26:27.347580 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347591 | orchestrator | 2026-04-16 08:26:27.347601 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:26:27.347612 | orchestrator | Thursday 16 April 2026 08:25:58 +0000 (0:00:00.799) 0:40:05.305 ******** 2026-04-16 08:26:27.347622 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347633 | orchestrator | 2026-04-16 08:26:27.347644 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 08:26:27.347655 | orchestrator | Thursday 16 April 2026 08:25:59 +0000 (0:00:00.785) 0:40:06.090 ******** 2026-04-16 08:26:27.347666 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.347676 | orchestrator | 2026-04-16 08:26:27.347687 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:26:27.347698 | orchestrator | Thursday 16 April 2026 08:26:00 +0000 (0:00:00.765) 0:40:06.855 ******** 2026-04-16 08:26:27.347709 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347720 | orchestrator | 2026-04-16 08:26:27.347730 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 
08:26:27.347741 | orchestrator | Thursday 16 April 2026 08:26:00 +0000 (0:00:00.783) 0:40:07.638 ******** 2026-04-16 08:26:27.347752 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347762 | orchestrator | 2026-04-16 08:26:27.347773 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:26:27.347784 | orchestrator | Thursday 16 April 2026 08:26:01 +0000 (0:00:00.759) 0:40:08.398 ******** 2026-04-16 08:26:27.347795 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347805 | orchestrator | 2026-04-16 08:26:27.347818 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:26:27.347837 | orchestrator | Thursday 16 April 2026 08:26:02 +0000 (0:00:00.759) 0:40:09.157 ******** 2026-04-16 08:26:27.347855 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347890 | orchestrator | 2026-04-16 08:26:27.347908 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:26:27.347926 | orchestrator | Thursday 16 April 2026 08:26:03 +0000 (0:00:00.771) 0:40:09.929 ******** 2026-04-16 08:26:27.347946 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.347964 | orchestrator | 2026-04-16 08:26:27.347982 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:26:27.348002 | orchestrator | Thursday 16 April 2026 08:26:03 +0000 (0:00:00.756) 0:40:10.686 ******** 2026-04-16 08:26:27.348022 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348042 | orchestrator | 2026-04-16 08:26:27.348086 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:26:27.348103 | orchestrator | Thursday 16 April 2026 08:26:04 +0000 (0:00:00.752) 0:40:11.438 ******** 2026-04-16 08:26:27.348118 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348136 | 
orchestrator | 2026-04-16 08:26:27.348154 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:26:27.348175 | orchestrator | Thursday 16 April 2026 08:26:05 +0000 (0:00:00.785) 0:40:12.223 ******** 2026-04-16 08:26:27.348194 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348213 | orchestrator | 2026-04-16 08:26:27.348232 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:26:27.348244 | orchestrator | Thursday 16 April 2026 08:26:06 +0000 (0:00:00.781) 0:40:13.005 ******** 2026-04-16 08:26:27.348279 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348291 | orchestrator | 2026-04-16 08:26:27.348302 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:26:27.348326 | orchestrator | Thursday 16 April 2026 08:26:07 +0000 (0:00:00.761) 0:40:13.766 ******** 2026-04-16 08:26:27.348336 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348347 | orchestrator | 2026-04-16 08:26:27.348358 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:26:27.348369 | orchestrator | Thursday 16 April 2026 08:26:07 +0000 (0:00:00.755) 0:40:14.521 ******** 2026-04-16 08:26:27.348380 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348391 | orchestrator | 2026-04-16 08:26:27.348401 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:26:27.348412 | orchestrator | Thursday 16 April 2026 08:26:08 +0000 (0:00:00.766) 0:40:15.288 ******** 2026-04-16 08:26:27.348423 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348434 | orchestrator | 2026-04-16 08:26:27.348445 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:26:27.348461 | orchestrator | Thursday 16 
April 2026 08:26:09 +0000 (0:00:00.769) 0:40:16.057 ******** 2026-04-16 08:26:27.348486 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.348509 | orchestrator | 2026-04-16 08:26:27.348528 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:26:27.348546 | orchestrator | Thursday 16 April 2026 08:26:10 +0000 (0:00:01.610) 0:40:17.668 ******** 2026-04-16 08:26:27.348563 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.348580 | orchestrator | 2026-04-16 08:26:27.348597 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:26:27.348628 | orchestrator | Thursday 16 April 2026 08:26:12 +0000 (0:00:01.836) 0:40:19.504 ******** 2026-04-16 08:26:27.348647 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-16 08:26:27.348669 | orchestrator | 2026-04-16 08:26:27.348688 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:26:27.348705 | orchestrator | Thursday 16 April 2026 08:26:13 +0000 (0:00:01.176) 0:40:20.681 ******** 2026-04-16 08:26:27.348725 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348743 | orchestrator | 2026-04-16 08:26:27.348761 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:26:27.348780 | orchestrator | Thursday 16 April 2026 08:26:15 +0000 (0:00:01.109) 0:40:21.791 ******** 2026-04-16 08:26:27.348791 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348802 | orchestrator | 2026-04-16 08:26:27.348812 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:26:27.348823 | orchestrator | Thursday 16 April 2026 08:26:16 +0000 (0:00:01.114) 0:40:22.905 ******** 2026-04-16 08:26:27.348834 | orchestrator | ok: [testbed-node-4] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:26:27.348845 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:26:27.348855 | orchestrator | 2026-04-16 08:26:27.348866 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:26:27.348877 | orchestrator | Thursday 16 April 2026 08:26:17 +0000 (0:00:01.790) 0:40:24.695 ******** 2026-04-16 08:26:27.348888 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.348898 | orchestrator | 2026-04-16 08:26:27.348909 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:26:27.348920 | orchestrator | Thursday 16 April 2026 08:26:19 +0000 (0:00:01.451) 0:40:26.146 ******** 2026-04-16 08:26:27.348930 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348941 | orchestrator | 2026-04-16 08:26:27.348952 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:26:27.348963 | orchestrator | Thursday 16 April 2026 08:26:20 +0000 (0:00:01.132) 0:40:27.279 ******** 2026-04-16 08:26:27.348973 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.348984 | orchestrator | 2026-04-16 08:26:27.348995 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:26:27.349005 | orchestrator | Thursday 16 April 2026 08:26:21 +0000 (0:00:00.779) 0:40:28.059 ******** 2026-04-16 08:26:27.349028 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.349039 | orchestrator | 2026-04-16 08:26:27.349050 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:26:27.349116 | orchestrator | Thursday 16 April 2026 08:26:22 +0000 (0:00:00.775) 0:40:28.834 ******** 2026-04-16 08:26:27.349128 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-4 2026-04-16 08:26:27.349139 | orchestrator | 2026-04-16 08:26:27.349150 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:26:27.349161 | orchestrator | Thursday 16 April 2026 08:26:23 +0000 (0:00:01.132) 0:40:29.967 ******** 2026-04-16 08:26:27.349172 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:26:27.349183 | orchestrator | 2026-04-16 08:26:27.349194 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:26:27.349205 | orchestrator | Thursday 16 April 2026 08:26:24 +0000 (0:00:01.764) 0:40:31.732 ******** 2026-04-16 08:26:27.349216 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:26:27.349226 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:26:27.349237 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:26:27.349248 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.349259 | orchestrator | 2026-04-16 08:26:27.349270 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:26:27.349281 | orchestrator | Thursday 16 April 2026 08:26:26 +0000 (0:00:01.168) 0:40:32.900 ******** 2026-04-16 08:26:27.349292 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:26:27.349303 | orchestrator | 2026-04-16 08:26:27.349314 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 08:26:27.349325 | orchestrator | Thursday 16 April 2026 08:26:27 +0000 (0:00:01.111) 0:40:34.012 ******** 2026-04-16 08:26:27.349348 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363179 | orchestrator | 2026-04-16 08:27:10.363274 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:27:10.363286 | 
orchestrator | Thursday 16 April 2026 08:26:28 +0000 (0:00:01.148) 0:40:35.161 ******** 2026-04-16 08:27:10.363293 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363300 | orchestrator | 2026-04-16 08:27:10.363307 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:27:10.363313 | orchestrator | Thursday 16 April 2026 08:26:29 +0000 (0:00:01.127) 0:40:36.288 ******** 2026-04-16 08:27:10.363320 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363327 | orchestrator | 2026-04-16 08:27:10.363333 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:27:10.363340 | orchestrator | Thursday 16 April 2026 08:26:30 +0000 (0:00:01.168) 0:40:37.456 ******** 2026-04-16 08:27:10.363346 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363353 | orchestrator | 2026-04-16 08:27:10.363359 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:27:10.363365 | orchestrator | Thursday 16 April 2026 08:26:31 +0000 (0:00:00.775) 0:40:38.232 ******** 2026-04-16 08:27:10.363371 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:10.363379 | orchestrator | 2026-04-16 08:27:10.363385 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:27:10.363392 | orchestrator | Thursday 16 April 2026 08:26:33 +0000 (0:00:02.232) 0:40:40.465 ******** 2026-04-16 08:27:10.363398 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:10.363404 | orchestrator | 2026-04-16 08:27:10.363411 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:27:10.363430 | orchestrator | Thursday 16 April 2026 08:26:34 +0000 (0:00:00.781) 0:40:41.246 ******** 2026-04-16 08:27:10.363437 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 
2026-04-16 08:27:10.363443 | orchestrator | 2026-04-16 08:27:10.363466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:27:10.363473 | orchestrator | Thursday 16 April 2026 08:26:35 +0000 (0:00:01.115) 0:40:42.361 ******** 2026-04-16 08:27:10.363479 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363486 | orchestrator | 2026-04-16 08:27:10.363492 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:27:10.363498 | orchestrator | Thursday 16 April 2026 08:26:36 +0000 (0:00:01.142) 0:40:43.504 ******** 2026-04-16 08:27:10.363505 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363511 | orchestrator | 2026-04-16 08:27:10.363517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:27:10.363524 | orchestrator | Thursday 16 April 2026 08:26:37 +0000 (0:00:01.126) 0:40:44.631 ******** 2026-04-16 08:27:10.363530 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363536 | orchestrator | 2026-04-16 08:27:10.363543 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:27:10.363549 | orchestrator | Thursday 16 April 2026 08:26:39 +0000 (0:00:01.150) 0:40:45.782 ******** 2026-04-16 08:27:10.363555 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363562 | orchestrator | 2026-04-16 08:27:10.363568 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 08:27:10.363574 | orchestrator | Thursday 16 April 2026 08:26:40 +0000 (0:00:01.123) 0:40:46.905 ******** 2026-04-16 08:27:10.363581 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363587 | orchestrator | 2026-04-16 08:27:10.363593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:27:10.363600 | orchestrator | 
Thursday 16 April 2026 08:26:41 +0000 (0:00:01.143) 0:40:48.049 ******** 2026-04-16 08:27:10.363606 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363612 | orchestrator | 2026-04-16 08:27:10.363619 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:27:10.363625 | orchestrator | Thursday 16 April 2026 08:26:42 +0000 (0:00:01.126) 0:40:49.175 ******** 2026-04-16 08:27:10.363631 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363637 | orchestrator | 2026-04-16 08:27:10.363644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:27:10.363650 | orchestrator | Thursday 16 April 2026 08:26:43 +0000 (0:00:01.140) 0:40:50.316 ******** 2026-04-16 08:27:10.363656 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363663 | orchestrator | 2026-04-16 08:27:10.363669 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:27:10.363675 | orchestrator | Thursday 16 April 2026 08:26:44 +0000 (0:00:01.135) 0:40:51.451 ******** 2026-04-16 08:27:10.363681 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:10.363688 | orchestrator | 2026-04-16 08:27:10.363694 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:27:10.363701 | orchestrator | Thursday 16 April 2026 08:26:45 +0000 (0:00:00.825) 0:40:52.277 ******** 2026-04-16 08:27:10.363708 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-16 08:27:10.363717 | orchestrator | 2026-04-16 08:27:10.363724 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:27:10.363732 | orchestrator | Thursday 16 April 2026 08:26:46 +0000 (0:00:01.101) 0:40:53.379 ******** 2026-04-16 08:27:10.363739 | orchestrator | ok: [testbed-node-4] => 
(item=/etc/ceph) 2026-04-16 08:27:10.363747 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-16 08:27:10.363754 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-16 08:27:10.363762 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-16 08:27:10.363769 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-16 08:27:10.363777 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-16 08:27:10.363784 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-16 08:27:10.363791 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:27:10.363805 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:27:10.363826 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:27:10.363833 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:27:10.363839 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:27:10.363846 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:27:10.363852 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:27:10.363858 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-16 08:27:10.363864 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-16 08:27:10.363871 | orchestrator | 2026-04-16 08:27:10.363877 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:27:10.363883 | orchestrator | Thursday 16 April 2026 08:26:53 +0000 (0:00:06.402) 0:40:59.781 ******** 2026-04-16 08:27:10.363889 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-16 08:27:10.363896 | orchestrator | 2026-04-16 08:27:10.363902 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:27:10.363908 | orchestrator | Thursday 16 April 2026 08:26:54 +0000 (0:00:01.104) 0:41:00.885 ******** 2026-04-16 08:27:10.363915 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:27:10.363922 | orchestrator | 2026-04-16 08:27:10.363928 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:27:10.363939 | orchestrator | Thursday 16 April 2026 08:26:55 +0000 (0:00:01.517) 0:41:02.403 ******** 2026-04-16 08:27:10.363945 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:27:10.363951 | orchestrator | 2026-04-16 08:27:10.363958 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:27:10.363964 | orchestrator | Thursday 16 April 2026 08:26:57 +0000 (0:00:01.673) 0:41:04.077 ******** 2026-04-16 08:27:10.363970 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.363976 | orchestrator | 2026-04-16 08:27:10.363983 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:27:10.363989 | orchestrator | Thursday 16 April 2026 08:26:58 +0000 (0:00:00.815) 0:41:04.892 ******** 2026-04-16 08:27:10.363995 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364001 | orchestrator | 2026-04-16 08:27:10.364008 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:27:10.364014 | orchestrator | Thursday 16 April 2026 08:26:58 +0000 (0:00:00.742) 0:41:05.635 ******** 2026-04-16 08:27:10.364020 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364026 | orchestrator | 2026-04-16 08:27:10.364033 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-04-16 08:27:10.364039 | orchestrator | Thursday 16 April 2026 08:26:59 +0000 (0:00:00.786) 0:41:06.422 ******** 2026-04-16 08:27:10.364045 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364051 | orchestrator | 2026-04-16 08:27:10.364076 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:27:10.364083 | orchestrator | Thursday 16 April 2026 08:27:00 +0000 (0:00:00.781) 0:41:07.204 ******** 2026-04-16 08:27:10.364089 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364095 | orchestrator | 2026-04-16 08:27:10.364102 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:27:10.364108 | orchestrator | Thursday 16 April 2026 08:27:01 +0000 (0:00:00.771) 0:41:07.975 ******** 2026-04-16 08:27:10.364114 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364121 | orchestrator | 2026-04-16 08:27:10.364127 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:27:10.364138 | orchestrator | Thursday 16 April 2026 08:27:01 +0000 (0:00:00.758) 0:41:08.733 ******** 2026-04-16 08:27:10.364144 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364150 | orchestrator | 2026-04-16 08:27:10.364157 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:27:10.364163 | orchestrator | Thursday 16 April 2026 08:27:02 +0000 (0:00:00.759) 0:41:09.493 ******** 2026-04-16 08:27:10.364169 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364175 | orchestrator | 2026-04-16 08:27:10.364182 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:27:10.364188 | orchestrator | Thursday 16 
April 2026 08:27:03 +0000 (0:00:00.749) 0:41:10.242 ******** 2026-04-16 08:27:10.364194 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364200 | orchestrator | 2026-04-16 08:27:10.364207 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:27:10.364213 | orchestrator | Thursday 16 April 2026 08:27:04 +0000 (0:00:00.745) 0:41:10.988 ******** 2026-04-16 08:27:10.364219 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:10.364225 | orchestrator | 2026-04-16 08:27:10.364231 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:27:10.364238 | orchestrator | Thursday 16 April 2026 08:27:05 +0000 (0:00:00.774) 0:41:11.762 ******** 2026-04-16 08:27:10.364244 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:10.364250 | orchestrator | 2026-04-16 08:27:10.364256 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:27:10.364263 | orchestrator | Thursday 16 April 2026 08:27:05 +0000 (0:00:00.866) 0:41:12.629 ******** 2026-04-16 08:27:10.364269 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:27:10.364275 | orchestrator | 2026-04-16 08:27:10.364281 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:27:10.364287 | orchestrator | Thursday 16 April 2026 08:27:10 +0000 (0:00:04.376) 0:41:17.006 ******** 2026-04-16 08:27:10.364298 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:27:51.093027 | orchestrator | 2026-04-16 08:27:51.093196 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:27:51.093220 | orchestrator | Thursday 16 April 2026 08:27:11 +0000 (0:00:00.832) 0:41:17.839 ******** 2026-04-16 08:27:51.093234 | 
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-16 08:27:51.093248 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-16 08:27:51.093259 | orchestrator | 2026-04-16 08:27:51.093270 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:27:51.093280 | orchestrator | Thursday 16 April 2026 08:27:18 +0000 (0:00:07.536) 0:41:25.375 ******** 2026-04-16 08:27:51.093290 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093301 | orchestrator | 2026-04-16 08:27:51.093327 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:27:51.093337 | orchestrator | Thursday 16 April 2026 08:27:19 +0000 (0:00:00.760) 0:41:26.136 ******** 2026-04-16 08:27:51.093347 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093357 | orchestrator | 2026-04-16 08:27:51.093367 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:27:51.093400 | orchestrator | Thursday 16 April 2026 08:27:20 +0000 (0:00:00.765) 0:41:26.902 ******** 2026-04-16 08:27:51.093411 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093420 | orchestrator | 2026-04-16 08:27:51.093430 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-04-16 08:27:51.093440 | orchestrator | Thursday 16 April 2026 08:27:20 +0000 (0:00:00.815) 0:41:27.718 ******** 2026-04-16 08:27:51.093449 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093459 | orchestrator | 2026-04-16 08:27:51.093468 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:27:51.093479 | orchestrator | Thursday 16 April 2026 08:27:21 +0000 (0:00:00.780) 0:41:28.499 ******** 2026-04-16 08:27:51.093495 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093519 | orchestrator | 2026-04-16 08:27:51.093538 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:27:51.093554 | orchestrator | Thursday 16 April 2026 08:27:22 +0000 (0:00:00.775) 0:41:29.274 ******** 2026-04-16 08:27:51.093571 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.093588 | orchestrator | 2026-04-16 08:27:51.093603 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:27:51.093617 | orchestrator | Thursday 16 April 2026 08:27:23 +0000 (0:00:00.884) 0:41:30.159 ******** 2026-04-16 08:27:51.093633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:27:51.093650 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:27:51.093665 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:27:51.093681 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093698 | orchestrator | 2026-04-16 08:27:51.093715 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:27:51.093730 | orchestrator | Thursday 16 April 2026 08:27:24 +0000 (0:00:01.055) 0:41:31.214 ******** 2026-04-16 08:27:51.093745 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:27:51.093762 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:27:51.093779 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:27:51.093794 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093810 | orchestrator | 2026-04-16 08:27:51.093825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:27:51.093843 | orchestrator | Thursday 16 April 2026 08:27:25 +0000 (0:00:01.080) 0:41:32.295 ******** 2026-04-16 08:27:51.093860 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:27:51.093908 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:27:51.093930 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:27:51.093940 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.093949 | orchestrator | 2026-04-16 08:27:51.093959 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:27:51.093969 | orchestrator | Thursday 16 April 2026 08:27:26 +0000 (0:00:01.032) 0:41:33.327 ******** 2026-04-16 08:27:51.093978 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.093988 | orchestrator | 2026-04-16 08:27:51.093997 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:27:51.094007 | orchestrator | Thursday 16 April 2026 08:27:27 +0000 (0:00:00.801) 0:41:34.129 ******** 2026-04-16 08:27:51.094085 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 08:27:51.094097 | orchestrator | 2026-04-16 08:27:51.094106 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:27:51.094149 | orchestrator | Thursday 16 April 2026 08:27:28 +0000 (0:00:01.017) 0:41:35.146 ******** 2026-04-16 08:27:51.094160 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.094170 | orchestrator | 
2026-04-16 08:27:51.094179 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-04-16 08:27:51.094189 | orchestrator | Thursday 16 April 2026 08:27:29 +0000 (0:00:01.404) 0:41:36.551 ******** 2026-04-16 08:27:51.094213 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.094223 | orchestrator | 2026-04-16 08:27:51.094254 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-04-16 08:27:51.094265 | orchestrator | Thursday 16 April 2026 08:27:30 +0000 (0:00:00.805) 0:41:37.357 ******** 2026-04-16 08:27:51.094274 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:27:51.094285 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:27:51.094294 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:27:51.094304 | orchestrator | 2026-04-16 08:27:51.094314 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-04-16 08:27:51.094323 | orchestrator | Thursday 16 April 2026 08:27:31 +0000 (0:00:01.286) 0:41:38.643 ******** 2026-04-16 08:27:51.094333 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-04-16 08:27:51.094342 | orchestrator | 2026-04-16 08:27:51.094352 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-04-16 08:27:51.094362 | orchestrator | Thursday 16 April 2026 08:27:32 +0000 (0:00:01.089) 0:41:39.732 ******** 2026-04-16 08:27:51.094371 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.094381 | orchestrator | 2026-04-16 08:27:51.094390 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-04-16 08:27:51.094400 | orchestrator | Thursday 16 April 2026 08:27:34 +0000 (0:00:01.191) 
0:41:40.924 ******** 2026-04-16 08:27:51.094409 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.094419 | orchestrator | 2026-04-16 08:27:51.094436 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-04-16 08:27:51.094446 | orchestrator | Thursday 16 April 2026 08:27:35 +0000 (0:00:01.095) 0:41:42.020 ******** 2026-04-16 08:27:51.094456 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.094465 | orchestrator | 2026-04-16 08:27:51.094475 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-04-16 08:27:51.094485 | orchestrator | Thursday 16 April 2026 08:27:36 +0000 (0:00:01.426) 0:41:43.447 ******** 2026-04-16 08:27:51.094495 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.094504 | orchestrator | 2026-04-16 08:27:51.094514 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-04-16 08:27:51.094524 | orchestrator | Thursday 16 April 2026 08:27:37 +0000 (0:00:01.158) 0:41:44.605 ******** 2026-04-16 08:27:51.094533 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-04-16 08:27:51.094544 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-04-16 08:27:51.094553 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-04-16 08:27:51.094563 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-04-16 08:27:51.094572 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-04-16 08:27:51.094582 | orchestrator | 2026-04-16 08:27:51.094592 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-04-16 08:27:51.094601 | orchestrator | Thursday 16 April 2026 08:27:40 +0000 (0:00:02.528) 0:41:47.133 ******** 2026-04-16 
08:27:51.094611 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.094620 | orchestrator | 2026-04-16 08:27:51.094634 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-04-16 08:27:51.094649 | orchestrator | Thursday 16 April 2026 08:27:41 +0000 (0:00:00.788) 0:41:47.922 ******** 2026-04-16 08:27:51.094665 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-04-16 08:27:51.094681 | orchestrator | 2026-04-16 08:27:51.094696 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-04-16 08:27:51.094713 | orchestrator | Thursday 16 April 2026 08:27:42 +0000 (0:00:01.075) 0:41:48.998 ******** 2026-04-16 08:27:51.094741 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-04-16 08:27:51.094757 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-04-16 08:27:51.094773 | orchestrator | 2026-04-16 08:27:51.094789 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-04-16 08:27:51.094803 | orchestrator | Thursday 16 April 2026 08:27:44 +0000 (0:00:01.839) 0:41:50.837 ******** 2026-04-16 08:27:51.094816 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:27:51.094833 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 08:27:51.094848 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:27:51.094863 | orchestrator | 2026-04-16 08:27:51.094879 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:27:51.094895 | orchestrator | Thursday 16 April 2026 08:27:47 +0000 (0:00:03.583) 0:41:54.421 ******** 2026-04-16 08:27:51.094912 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 08:27:51.094929 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 
08:27:51.094946 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:27:51.094964 | orchestrator | 2026-04-16 08:27:51.094980 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-04-16 08:27:51.094995 | orchestrator | Thursday 16 April 2026 08:27:49 +0000 (0:00:01.635) 0:41:56.056 ******** 2026-04-16 08:27:51.095012 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.095028 | orchestrator | 2026-04-16 08:27:51.095045 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-04-16 08:27:51.095061 | orchestrator | Thursday 16 April 2026 08:27:50 +0000 (0:00:00.855) 0:41:56.912 ******** 2026-04-16 08:27:51.095077 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.095088 | orchestrator | 2026-04-16 08:27:51.095097 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-04-16 08:27:51.095107 | orchestrator | Thursday 16 April 2026 08:27:50 +0000 (0:00:00.787) 0:41:57.699 ******** 2026-04-16 08:27:51.095117 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:27:51.095184 | orchestrator | 2026-04-16 08:27:51.095206 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-04-16 08:28:53.892360 | orchestrator | Thursday 16 April 2026 08:27:51 +0000 (0:00:00.766) 0:41:58.466 ******** 2026-04-16 08:28:53.892482 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-04-16 08:28:53.892500 | orchestrator | 2026-04-16 08:28:53.892513 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-04-16 08:28:53.892525 | orchestrator | Thursday 16 April 2026 08:27:52 +0000 (0:00:01.118) 0:41:59.585 ******** 2026-04-16 08:28:53.892537 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:28:53.892549 | orchestrator | 2026-04-16 08:28:53.892560 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-04-16 08:28:53.892571 | orchestrator | Thursday 16 April 2026 08:27:54 +0000 (0:00:01.485) 0:42:01.071 ******** 2026-04-16 08:28:53.892582 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:28:53.892593 | orchestrator | 2026-04-16 08:28:53.892604 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-04-16 08:28:53.892615 | orchestrator | Thursday 16 April 2026 08:27:57 +0000 (0:00:03.540) 0:42:04.611 ******** 2026-04-16 08:28:53.892626 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-04-16 08:28:53.892637 | orchestrator | 2026-04-16 08:28:53.892648 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-04-16 08:28:53.892658 | orchestrator | Thursday 16 April 2026 08:27:58 +0000 (0:00:01.110) 0:42:05.721 ******** 2026-04-16 08:28:53.892669 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:28:53.892680 | orchestrator | 2026-04-16 08:28:53.892707 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-04-16 08:28:53.892721 | orchestrator | Thursday 16 April 2026 08:28:00 +0000 (0:00:01.985) 0:42:07.706 ******** 2026-04-16 08:28:53.892744 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:28:53.892801 | orchestrator | 2026-04-16 08:28:53.892820 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-04-16 08:28:53.892840 | orchestrator | Thursday 16 April 2026 08:28:02 +0000 (0:00:01.974) 0:42:09.681 ******** 2026-04-16 08:28:53.892859 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:28:53.892879 | orchestrator | 2026-04-16 08:28:53.892899 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-04-16 08:28:53.892917 | orchestrator | Thursday 16 April 2026 08:28:05 +0000 (0:00:02.208) 0:42:11.889 ******** 2026-04-16 
08:28:53.892936 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.892952 | orchestrator | 2026-04-16 08:28:53.892964 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-04-16 08:28:53.892975 | orchestrator | Thursday 16 April 2026 08:28:06 +0000 (0:00:01.129) 0:42:13.019 ******** 2026-04-16 08:28:53.892985 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.892996 | orchestrator | 2026-04-16 08:28:53.893008 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-04-16 08:28:53.893019 | orchestrator | Thursday 16 April 2026 08:28:07 +0000 (0:00:01.142) 0:42:14.162 ******** 2026-04-16 08:28:53.893030 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-04-16 08:28:53.893041 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-16 08:28:53.893052 | orchestrator | 2026-04-16 08:28:53.893063 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-04-16 08:28:53.893073 | orchestrator | Thursday 16 April 2026 08:28:09 +0000 (0:00:01.863) 0:42:16.025 ******** 2026-04-16 08:28:53.893084 | orchestrator | ok: [testbed-node-4] => (item=2) 2026-04-16 08:28:53.893095 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-04-16 08:28:53.893106 | orchestrator | 2026-04-16 08:28:53.893117 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-04-16 08:28:53.893127 | orchestrator | Thursday 16 April 2026 08:28:12 +0000 (0:00:02.872) 0:42:18.898 ******** 2026-04-16 08:28:53.893138 | orchestrator | changed: [testbed-node-4] => (item=2) 2026-04-16 08:28:53.893149 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-04-16 08:28:53.893160 | orchestrator | 2026-04-16 08:28:53.893170 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-04-16 08:28:53.893200 | orchestrator | Thursday 16 April 2026 08:28:16 +0000 (0:00:04.453) 
0:42:23.352 ******** 2026-04-16 08:28:53.893287 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.893301 | orchestrator | 2026-04-16 08:28:53.893312 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-04-16 08:28:53.893322 | orchestrator | Thursday 16 April 2026 08:28:17 +0000 (0:00:00.856) 0:42:24.208 ******** 2026-04-16 08:28:53.893333 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.893344 | orchestrator | 2026-04-16 08:28:53.893354 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-04-16 08:28:53.893365 | orchestrator | Thursday 16 April 2026 08:28:18 +0000 (0:00:00.875) 0:42:25.084 ******** 2026-04-16 08:28:53.893376 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.893387 | orchestrator | 2026-04-16 08:28:53.893397 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-04-16 08:28:53.893408 | orchestrator | Thursday 16 April 2026 08:28:19 +0000 (0:00:00.848) 0:42:25.933 ******** 2026-04-16 08:28:53.893419 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.893430 | orchestrator | 2026-04-16 08:28:53.893440 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-04-16 08:28:53.893460 | orchestrator | Thursday 16 April 2026 08:28:19 +0000 (0:00:00.765) 0:42:26.698 ******** 2026-04-16 08:28:53.893480 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:28:53.893498 | orchestrator | 2026-04-16 08:28:53.893519 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-04-16 08:28:53.893537 | orchestrator | Thursday 16 April 2026 08:28:20 +0000 (0:00:00.761) 0:42:27.460 ******** 2026-04-16 08:28:53.893556 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-04-16 08:28:53.893593 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-04-16 08:28:53.893615 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-04-16 08:28:53.893662 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-04-16 08:28:53.893675 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:28:53.893686 | orchestrator | 2026-04-16 08:28:53.893697 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-04-16 08:28:53.893708 | orchestrator | 2026-04-16 08:28:53.893719 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:28:53.893730 | orchestrator | Thursday 16 April 2026 08:28:34 +0000 (0:00:13.890) 0:42:41.350 ******** 2026-04-16 08:28:53.893741 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-16 08:28:53.893751 | orchestrator | 2026-04-16 08:28:53.893762 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:28:53.893773 | orchestrator | Thursday 16 April 2026 08:28:35 +0000 (0:00:01.222) 0:42:42.572 ******** 2026-04-16 08:28:53.893783 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.893794 | orchestrator | 2026-04-16 08:28:53.893805 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:28:53.893816 | orchestrator | Thursday 16 April 2026 08:28:37 +0000 (0:00:01.459) 0:42:44.032 ******** 2026-04-16 08:28:53.893827 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.893838 | orchestrator | 2026-04-16 08:28:53.893848 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:28:53.893868 | 
orchestrator | Thursday 16 April 2026 08:28:38 +0000 (0:00:01.160) 0:42:45.192 ******** 2026-04-16 08:28:53.893882 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.893901 | orchestrator | 2026-04-16 08:28:53.893918 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:28:53.893936 | orchestrator | Thursday 16 April 2026 08:28:39 +0000 (0:00:01.441) 0:42:46.634 ******** 2026-04-16 08:28:53.893953 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.893969 | orchestrator | 2026-04-16 08:28:53.893987 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:28:53.894003 | orchestrator | Thursday 16 April 2026 08:28:41 +0000 (0:00:01.130) 0:42:47.764 ******** 2026-04-16 08:28:53.894099 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.894120 | orchestrator | 2026-04-16 08:28:53.894140 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:28:53.894152 | orchestrator | Thursday 16 April 2026 08:28:42 +0000 (0:00:01.129) 0:42:48.894 ******** 2026-04-16 08:28:53.894162 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.894173 | orchestrator | 2026-04-16 08:28:53.894184 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:28:53.894195 | orchestrator | Thursday 16 April 2026 08:28:43 +0000 (0:00:01.122) 0:42:50.017 ******** 2026-04-16 08:28:53.894205 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:28:53.894238 | orchestrator | 2026-04-16 08:28:53.894249 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:28:53.894260 | orchestrator | Thursday 16 April 2026 08:28:44 +0000 (0:00:01.123) 0:42:51.140 ******** 2026-04-16 08:28:53.894271 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.894282 | orchestrator | 2026-04-16 08:28:53.894293 | 
orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:28:53.894303 | orchestrator | Thursday 16 April 2026 08:28:45 +0000 (0:00:01.126) 0:42:52.267 ******** 2026-04-16 08:28:53.894314 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:28:53.894325 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:28:53.894336 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:28:53.894357 | orchestrator | 2026-04-16 08:28:53.894369 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:28:53.894379 | orchestrator | Thursday 16 April 2026 08:28:47 +0000 (0:00:01.922) 0:42:54.189 ******** 2026-04-16 08:28:53.894390 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:28:53.894401 | orchestrator | 2026-04-16 08:28:53.894412 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:28:53.894423 | orchestrator | Thursday 16 April 2026 08:28:48 +0000 (0:00:01.241) 0:42:55.431 ******** 2026-04-16 08:28:53.894433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:28:53.894444 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:28:53.894455 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:28:53.894466 | orchestrator | 2026-04-16 08:28:53.894477 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:28:53.894488 | orchestrator | Thursday 16 April 2026 08:28:51 +0000 (0:00:03.164) 0:42:58.595 ******** 2026-04-16 08:28:53.894499 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-16 08:28:53.894510 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-16 08:28:53.894521 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-16 08:28:53.894532 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:28:53.894543 | orchestrator | 2026-04-16 08:28:53.894554 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:28:53.894565 | orchestrator | Thursday 16 April 2026 08:28:53 +0000 (0:00:01.671) 0:43:00.267 ******** 2026-04-16 08:28:53.894578 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:28:53.894603 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:29:14.525082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:29:14.525198 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525216 | orchestrator | 2026-04-16 08:29:14.525228 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:29:14.525275 | orchestrator | Thursday 16 April 2026 08:28:55 +0000 (0:00:01.569) 0:43:01.836 ******** 2026-04-16 08:29:14.525290 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:14.525322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:14.525335 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:14.525369 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525381 | orchestrator | 2026-04-16 08:29:14.525392 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:29:14.525403 | orchestrator | Thursday 16 April 2026 08:28:56 +0000 (0:00:01.169) 0:43:03.005 ******** 2026-04-16 08:29:14.525416 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:28:49.530799', 'end': '2026-04-16 08:28:49.565721', 'delta': '0:00:00.034922', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 
'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:29:14.525431 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:28:50.064510', 'end': '2026-04-16 08:28:50.103698', 'delta': '0:00:00.039188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:29:14.525461 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:28:50.667325', 'end': '2026-04-16 08:28:50.722248', 'delta': '0:00:00.054923', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:29:14.525473 | orchestrator | 2026-04-16 08:29:14.525485 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 
2026-04-16 08:29:14.525496 | orchestrator | Thursday 16 April 2026 08:28:57 +0000 (0:00:01.244) 0:43:04.250 ******** 2026-04-16 08:29:14.525507 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.525519 | orchestrator | 2026-04-16 08:29:14.525530 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:29:14.525541 | orchestrator | Thursday 16 April 2026 08:28:58 +0000 (0:00:01.260) 0:43:05.510 ******** 2026-04-16 08:29:14.525552 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525563 | orchestrator | 2026-04-16 08:29:14.525574 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:29:14.525585 | orchestrator | Thursday 16 April 2026 08:29:00 +0000 (0:00:01.251) 0:43:06.762 ******** 2026-04-16 08:29:14.525596 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.525606 | orchestrator | 2026-04-16 08:29:14.525619 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:29:14.525632 | orchestrator | Thursday 16 April 2026 08:29:01 +0000 (0:00:01.118) 0:43:07.880 ******** 2026-04-16 08:29:14.525652 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:29:14.525666 | orchestrator | 2026-04-16 08:29:14.525684 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:29:14.525695 | orchestrator | Thursday 16 April 2026 08:29:03 +0000 (0:00:01.961) 0:43:09.842 ******** 2026-04-16 08:29:14.525706 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.525717 | orchestrator | 2026-04-16 08:29:14.525728 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:29:14.525739 | orchestrator | Thursday 16 April 2026 08:29:04 +0000 (0:00:01.100) 0:43:10.943 ******** 2026-04-16 08:29:14.525750 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
08:29:14.525761 | orchestrator | 2026-04-16 08:29:14.525772 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:29:14.525783 | orchestrator | Thursday 16 April 2026 08:29:05 +0000 (0:00:01.080) 0:43:12.023 ******** 2026-04-16 08:29:14.525794 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525805 | orchestrator | 2026-04-16 08:29:14.525815 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:29:14.525826 | orchestrator | Thursday 16 April 2026 08:29:06 +0000 (0:00:01.181) 0:43:13.204 ******** 2026-04-16 08:29:14.525837 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525848 | orchestrator | 2026-04-16 08:29:14.525859 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:29:14.525869 | orchestrator | Thursday 16 April 2026 08:29:07 +0000 (0:00:01.100) 0:43:14.305 ******** 2026-04-16 08:29:14.525880 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525891 | orchestrator | 2026-04-16 08:29:14.525902 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:29:14.525913 | orchestrator | Thursday 16 April 2026 08:29:08 +0000 (0:00:01.114) 0:43:15.420 ******** 2026-04-16 08:29:14.525924 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.525935 | orchestrator | 2026-04-16 08:29:14.525945 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:29:14.525956 | orchestrator | Thursday 16 April 2026 08:29:09 +0000 (0:00:01.155) 0:43:16.576 ******** 2026-04-16 08:29:14.525967 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.525978 | orchestrator | 2026-04-16 08:29:14.525989 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:29:14.526000 | orchestrator | Thursday 16 April 
2026 08:29:10 +0000 (0:00:01.148) 0:43:17.725 ******** 2026-04-16 08:29:14.526011 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.526099 | orchestrator | 2026-04-16 08:29:14.526111 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:29:14.526122 | orchestrator | Thursday 16 April 2026 08:29:12 +0000 (0:00:01.189) 0:43:18.914 ******** 2026-04-16 08:29:14.526132 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:29:14.526143 | orchestrator | 2026-04-16 08:29:14.526154 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:29:14.526166 | orchestrator | Thursday 16 April 2026 08:29:13 +0000 (0:00:01.082) 0:43:19.997 ******** 2026-04-16 08:29:14.526177 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:29:14.526188 | orchestrator | 2026-04-16 08:29:14.526209 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:29:14.526220 | orchestrator | Thursday 16 April 2026 08:29:14 +0000 (0:00:01.171) 0:43:21.168 ******** 2026-04-16 08:29:14.526423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.526474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}})  2026-04-16 08:29:14.641763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:29:14.641883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}})  2026-04-16 08:29:14.641902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.641916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.641928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:29:14.641941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.641973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:29:14.642004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.642077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}})  2026-04-16 08:29:14.642092 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}})  2026-04-16 08:29:14.642103 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:29:14.642128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-04-16 08:29:15.954526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:29:15.954646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:29:15.954665 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:29:15.954682 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:29:15.954695 | orchestrator |
2026-04-16 08:29:15.954707 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 08:29:15.954719 | orchestrator | Thursday 16 April 2026 08:29:15 +0000 (0:00:01.344) 0:43:22.513 ********
2026-04-16 08:29:15.954732 | orchestrator | skipping: [testbed-node-5] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954746 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954792 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954849 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954875 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954895 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:15.954983 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.307788 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.307903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.307922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.307959 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.308001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.308014 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.308026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:29:21.308046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:29:21.308087 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:29:21.308100 | orchestrator |
2026-04-16 08:29:21.308112 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:29:21.308125 | orchestrator | Thursday 16 April 2026 08:29:17 +0000 (0:00:01.370) 0:43:23.883 ********
2026-04-16 08:29:21.308137 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:29:21.308150 | orchestrator |
2026-04-16 08:29:21.308161 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:29:21.308173 | orchestrator | Thursday 16 April 2026 08:29:18 +0000 (0:00:01.503) 0:43:25.387 ********
2026-04-16 08:29:21.308184 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:29:21.308196 | orchestrator |
2026-04-16 08:29:21.308206 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:29:21.308218 | orchestrator | Thursday 16 April 2026 08:29:19 +0000 (0:00:01.123) 0:43:26.510 ********
2026-04-16 08:29:21.308229 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:29:21.308240 | orchestrator |
2026-04-16 08:29:21.308276 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:29:21.308295 | orchestrator | Thursday 16 April 2026 08:29:21 +0000 (0:00:01.545) 0:43:28.056 ********
2026-04-16 08:30:01.832097 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832221 | orchestrator |
2026-04-16 08:30:01.832238 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:30:01.832252 | orchestrator | Thursday 16 April 2026 08:29:22 +0000 (0:00:01.093) 0:43:29.150 ********
2026-04-16 08:30:01.832264 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832275 | orchestrator |
2026-04-16 08:30:01.832286 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:30:01.832384 | orchestrator | Thursday 16 April 2026 08:29:23 +0000 (0:00:01.199) 0:43:30.349 ********
2026-04-16 08:30:01.832398 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832410 | orchestrator |
2026-04-16 08:30:01.832422 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:30:01.832449 | orchestrator | Thursday 16 April 2026 08:29:24 +0000 (0:00:01.132) 0:43:31.482 ********
2026-04-16 08:30:01.832462 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:30:01.832474 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:30:01.832485 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:30:01.832496 | orchestrator |
2026-04-16 08:30:01.832508 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:30:01.832520 | orchestrator | Thursday 16 April 2026 08:29:26 +0000 (0:00:01.983) 0:43:33.465 ********
2026-04-16 08:30:01.832531 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:30:01.832543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:30:01.832579 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:30:01.832591 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832602 | orchestrator |
2026-04-16 08:30:01.832613 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:30:01.832624 | orchestrator | Thursday 16 April 2026 08:29:27 +0000 (0:00:01.195) 0:43:34.661 ********
2026-04-16 08:30:01.832635 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-16 08:30:01.832646 |
orchestrator |
2026-04-16 08:30:01.832658 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:30:01.832670 | orchestrator | Thursday 16 April 2026 08:29:29 +0000 (0:00:01.129) 0:43:35.791 ********
2026-04-16 08:30:01.832681 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832692 | orchestrator |
2026-04-16 08:30:01.832702 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:30:01.832713 | orchestrator | Thursday 16 April 2026 08:29:30 +0000 (0:00:01.119) 0:43:36.910 ********
2026-04-16 08:30:01.832724 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832735 | orchestrator |
2026-04-16 08:30:01.832746 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:30:01.832757 | orchestrator | Thursday 16 April 2026 08:29:31 +0000 (0:00:01.133) 0:43:38.043 ********
2026-04-16 08:30:01.832767 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832778 | orchestrator |
2026-04-16 08:30:01.832789 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:30:01.832800 | orchestrator | Thursday 16 April 2026 08:29:32 +0000 (0:00:01.117) 0:43:39.161 ********
2026-04-16 08:30:01.832811 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:30:01.832822 | orchestrator |
2026-04-16 08:30:01.832833 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:30:01.832844 | orchestrator | Thursday 16 April 2026 08:29:33 +0000 (0:00:01.227) 0:43:40.388 ********
2026-04-16 08:30:01.832855 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:30:01.832866 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:30:01.832877 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:30:01.832888 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832899 | orchestrator |
2026-04-16 08:30:01.832910 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:30:01.832921 | orchestrator | Thursday 16 April 2026 08:29:34 +0000 (0:00:01.366) 0:43:41.754 ********
2026-04-16 08:30:01.832932 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:30:01.832943 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:30:01.832953 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:30:01.832964 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.832975 | orchestrator |
2026-04-16 08:30:01.832986 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:30:01.832997 | orchestrator | Thursday 16 April 2026 08:29:36 +0000 (0:00:01.411) 0:43:43.166 ********
2026-04-16 08:30:01.833008 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:30:01.833018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:30:01.833029 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:30:01.833040 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:30:01.833051 | orchestrator |
2026-04-16 08:30:01.833062 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:30:01.833073 | orchestrator | Thursday 16 April 2026 08:29:37 +0000 (0:00:01.381) 0:43:44.548 ********
2026-04-16 08:30:01.833084 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:30:01.833095 | orchestrator |
2026-04-16 08:30:01.833106 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:30:01.833125 | orchestrator | Thursday 16 April 2026 08:29:38 +0000
(0:00:01.193) 0:43:45.741 ******** 2026-04-16 08:30:01.833137 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-16 08:30:01.833147 | orchestrator | 2026-04-16 08:30:01.833159 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:30:01.833170 | orchestrator | Thursday 16 April 2026 08:29:40 +0000 (0:00:01.721) 0:43:47.463 ******** 2026-04-16 08:30:01.833201 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:30:01.833213 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:30:01.833224 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:30:01.833235 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:30:01.833246 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:30:01.833257 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-16 08:30:01.833268 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:30:01.833279 | orchestrator | 2026-04-16 08:30:01.833322 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:30:01.833336 | orchestrator | Thursday 16 April 2026 08:29:42 +0000 (0:00:02.058) 0:43:49.522 ******** 2026-04-16 08:30:01.833347 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:30:01.833357 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:30:01.833368 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:30:01.833379 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-16 08:30:01.833390 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:30:01.833400 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-04-16 08:30:01.833411 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:30:01.833422 | orchestrator | 2026-04-16 08:30:01.833433 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-04-16 08:30:01.833451 | orchestrator | Thursday 16 April 2026 08:29:44 +0000 (0:00:02.154) 0:43:51.676 ******** 2026-04-16 08:30:01.833469 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833487 | orchestrator | 2026-04-16 08:30:01.833505 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-04-16 08:30:01.833523 | orchestrator | Thursday 16 April 2026 08:29:46 +0000 (0:00:01.123) 0:43:52.800 ******** 2026-04-16 08:30:01.833541 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833559 | orchestrator | 2026-04-16 08:30:01.833574 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-04-16 08:30:01.833585 | orchestrator | Thursday 16 April 2026 08:29:46 +0000 (0:00:00.766) 0:43:53.567 ******** 2026-04-16 08:30:01.833596 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833607 | orchestrator | 2026-04-16 08:30:01.833618 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-04-16 08:30:01.833629 | orchestrator | Thursday 16 April 2026 08:29:47 +0000 (0:00:00.868) 0:43:54.435 ******** 2026-04-16 08:30:01.833640 | orchestrator | changed: [testbed-node-5] => (item=1) 2026-04-16 08:30:01.833651 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-04-16 08:30:01.833662 | orchestrator | 2026-04-16 08:30:01.833672 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-04-16 08:30:01.833683 | orchestrator | Thursday 16 April 2026 08:29:51 +0000 (0:00:03.873) 0:43:58.309 ******** 2026-04-16 08:30:01.833694 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-04-16 08:30:01.833705 | orchestrator | 2026-04-16 08:30:01.833716 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:30:01.833735 | orchestrator | Thursday 16 April 2026 08:29:52 +0000 (0:00:01.092) 0:43:59.402 ******** 2026-04-16 08:30:01.833746 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-04-16 08:30:01.833757 | orchestrator | 2026-04-16 08:30:01.833768 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:30:01.833779 | orchestrator | Thursday 16 April 2026 08:29:53 +0000 (0:00:01.108) 0:44:00.511 ******** 2026-04-16 08:30:01.833790 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:01.833801 | orchestrator | 2026-04-16 08:30:01.833812 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:30:01.833823 | orchestrator | Thursday 16 April 2026 08:29:54 +0000 (0:00:01.108) 0:44:01.620 ******** 2026-04-16 08:30:01.833834 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833844 | orchestrator | 2026-04-16 08:30:01.833855 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:30:01.833866 | orchestrator | Thursday 16 April 2026 08:29:56 +0000 (0:00:01.515) 0:44:03.136 ******** 2026-04-16 08:30:01.833878 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833896 | orchestrator | 2026-04-16 08:30:01.833924 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 08:30:01.833943 | orchestrator | 
Thursday 16 April 2026 08:29:57 +0000 (0:00:01.510) 0:44:04.646 ******** 2026-04-16 08:30:01.833962 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:01.833979 | orchestrator | 2026-04-16 08:30:01.833997 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 08:30:01.834013 | orchestrator | Thursday 16 April 2026 08:29:59 +0000 (0:00:01.546) 0:44:06.193 ******** 2026-04-16 08:30:01.834116 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:01.834136 | orchestrator | 2026-04-16 08:30:01.834151 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 08:30:01.834163 | orchestrator | Thursday 16 April 2026 08:30:00 +0000 (0:00:01.136) 0:44:07.330 ******** 2026-04-16 08:30:01.834174 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:01.834185 | orchestrator | 2026-04-16 08:30:01.834195 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 08:30:01.834206 | orchestrator | Thursday 16 April 2026 08:30:01 +0000 (0:00:01.103) 0:44:08.433 ******** 2026-04-16 08:30:01.834217 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:01.834228 | orchestrator | 2026-04-16 08:30:01.834251 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 08:30:40.802993 | orchestrator | Thursday 16 April 2026 08:30:02 +0000 (0:00:01.116) 0:44:09.549 ******** 2026-04-16 08:30:40.803145 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.803179 | orchestrator | 2026-04-16 08:30:40.803200 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:30:40.803218 | orchestrator | Thursday 16 April 2026 08:30:04 +0000 (0:00:01.525) 0:44:11.075 ******** 2026-04-16 08:30:40.803237 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.803257 | orchestrator | 2026-04-16 08:30:40.803277 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:30:40.803298 | orchestrator | Thursday 16 April 2026 08:30:05 +0000 (0:00:01.528) 0:44:12.604 ******** 2026-04-16 08:30:40.803336 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.803455 | orchestrator | 2026-04-16 08:30:40.803475 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:30:40.803494 | orchestrator | Thursday 16 April 2026 08:30:06 +0000 (0:00:00.783) 0:44:13.388 ******** 2026-04-16 08:30:40.803513 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.803534 | orchestrator | 2026-04-16 08:30:40.803555 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:30:40.803575 | orchestrator | Thursday 16 April 2026 08:30:07 +0000 (0:00:00.761) 0:44:14.150 ******** 2026-04-16 08:30:40.803594 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.803612 | orchestrator | 2026-04-16 08:30:40.803632 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:30:40.803686 | orchestrator | Thursday 16 April 2026 08:30:08 +0000 (0:00:00.816) 0:44:14.966 ******** 2026-04-16 08:30:40.803708 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.803729 | orchestrator | 2026-04-16 08:30:40.803749 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:30:40.803770 | orchestrator | Thursday 16 April 2026 08:30:08 +0000 (0:00:00.768) 0:44:15.734 ******** 2026-04-16 08:30:40.803790 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.803809 | orchestrator | 2026-04-16 08:30:40.803830 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:30:40.803850 | orchestrator | Thursday 16 April 2026 08:30:09 +0000 (0:00:00.758) 0:44:16.493 ******** 2026-04-16 08:30:40.803867 | 
orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.803880 | orchestrator | 2026-04-16 08:30:40.803891 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:30:40.803902 | orchestrator | Thursday 16 April 2026 08:30:10 +0000 (0:00:00.811) 0:44:17.305 ******** 2026-04-16 08:30:40.803913 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.803924 | orchestrator | 2026-04-16 08:30:40.803934 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:30:40.803945 | orchestrator | Thursday 16 April 2026 08:30:11 +0000 (0:00:00.834) 0:44:18.140 ******** 2026-04-16 08:30:40.803956 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.803967 | orchestrator | 2026-04-16 08:30:40.803978 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:30:40.803989 | orchestrator | Thursday 16 April 2026 08:30:12 +0000 (0:00:00.816) 0:44:18.957 ******** 2026-04-16 08:30:40.804000 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.804010 | orchestrator | 2026-04-16 08:30:40.804021 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 08:30:40.804032 | orchestrator | Thursday 16 April 2026 08:30:12 +0000 (0:00:00.780) 0:44:19.738 ******** 2026-04-16 08:30:40.804042 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.804053 | orchestrator | 2026-04-16 08:30:40.804064 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:30:40.804074 | orchestrator | Thursday 16 April 2026 08:30:13 +0000 (0:00:00.772) 0:44:20.510 ******** 2026-04-16 08:30:40.804085 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804095 | orchestrator | 2026-04-16 08:30:40.804106 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 
08:30:40.804117 | orchestrator | Thursday 16 April 2026 08:30:14 +0000 (0:00:00.781) 0:44:21.291 ******** 2026-04-16 08:30:40.804127 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804138 | orchestrator | 2026-04-16 08:30:40.804148 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:30:40.804159 | orchestrator | Thursday 16 April 2026 08:30:15 +0000 (0:00:00.785) 0:44:22.077 ******** 2026-04-16 08:30:40.804170 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804180 | orchestrator | 2026-04-16 08:30:40.804191 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:30:40.804202 | orchestrator | Thursday 16 April 2026 08:30:16 +0000 (0:00:00.764) 0:44:22.841 ******** 2026-04-16 08:30:40.804212 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804223 | orchestrator | 2026-04-16 08:30:40.804234 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:30:40.804244 | orchestrator | Thursday 16 April 2026 08:30:16 +0000 (0:00:00.761) 0:44:23.603 ******** 2026-04-16 08:30:40.804255 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804266 | orchestrator | 2026-04-16 08:30:40.804276 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:30:40.804287 | orchestrator | Thursday 16 April 2026 08:30:17 +0000 (0:00:00.777) 0:44:24.381 ******** 2026-04-16 08:30:40.804298 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804309 | orchestrator | 2026-04-16 08:30:40.804319 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:30:40.804371 | orchestrator | Thursday 16 April 2026 08:30:18 +0000 (0:00:00.770) 0:44:25.151 ******** 2026-04-16 08:30:40.804391 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804405 | 
orchestrator | 2026-04-16 08:30:40.804416 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:30:40.804427 | orchestrator | Thursday 16 April 2026 08:30:19 +0000 (0:00:00.733) 0:44:25.885 ******** 2026-04-16 08:30:40.804438 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804448 | orchestrator | 2026-04-16 08:30:40.804459 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:30:40.804470 | orchestrator | Thursday 16 April 2026 08:30:19 +0000 (0:00:00.755) 0:44:26.640 ******** 2026-04-16 08:30:40.804505 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804517 | orchestrator | 2026-04-16 08:30:40.804528 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:30:40.804538 | orchestrator | Thursday 16 April 2026 08:30:20 +0000 (0:00:00.762) 0:44:27.402 ******** 2026-04-16 08:30:40.804549 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804560 | orchestrator | 2026-04-16 08:30:40.804571 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:30:40.804582 | orchestrator | Thursday 16 April 2026 08:30:21 +0000 (0:00:00.763) 0:44:28.166 ******** 2026-04-16 08:30:40.804592 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804604 | orchestrator | 2026-04-16 08:30:40.804615 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:30:40.804634 | orchestrator | Thursday 16 April 2026 08:30:22 +0000 (0:00:00.757) 0:44:28.924 ******** 2026-04-16 08:30:40.804645 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804656 | orchestrator | 2026-04-16 08:30:40.804667 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:30:40.804678 | orchestrator | Thursday 16 
April 2026 08:30:22 +0000 (0:00:00.742) 0:44:29.666 ******** 2026-04-16 08:30:40.804689 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.804699 | orchestrator | 2026-04-16 08:30:40.804710 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:30:40.804721 | orchestrator | Thursday 16 April 2026 08:30:24 +0000 (0:00:01.581) 0:44:31.248 ******** 2026-04-16 08:30:40.804732 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.804743 | orchestrator | 2026-04-16 08:30:40.804754 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:30:40.804765 | orchestrator | Thursday 16 April 2026 08:30:26 +0000 (0:00:01.912) 0:44:33.161 ******** 2026-04-16 08:30:40.804776 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-16 08:30:40.804787 | orchestrator | 2026-04-16 08:30:40.804798 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:30:40.804812 | orchestrator | Thursday 16 April 2026 08:30:27 +0000 (0:00:01.138) 0:44:34.300 ******** 2026-04-16 08:30:40.804830 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804860 | orchestrator | 2026-04-16 08:30:40.804878 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:30:40.804896 | orchestrator | Thursday 16 April 2026 08:30:28 +0000 (0:00:01.099) 0:44:35.399 ******** 2026-04-16 08:30:40.804913 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.804930 | orchestrator | 2026-04-16 08:30:40.804945 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:30:40.804961 | orchestrator | Thursday 16 April 2026 08:30:29 +0000 (0:00:01.094) 0:44:36.494 ******** 2026-04-16 08:30:40.804978 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:30:40.804997 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:30:40.805014 | orchestrator | 2026-04-16 08:30:40.805033 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:30:40.805085 | orchestrator | Thursday 16 April 2026 08:30:31 +0000 (0:00:01.764) 0:44:38.259 ******** 2026-04-16 08:30:40.805110 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.805121 | orchestrator | 2026-04-16 08:30:40.805132 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:30:40.805144 | orchestrator | Thursday 16 April 2026 08:30:32 +0000 (0:00:01.445) 0:44:39.704 ******** 2026-04-16 08:30:40.805154 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.805165 | orchestrator | 2026-04-16 08:30:40.805176 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:30:40.805187 | orchestrator | Thursday 16 April 2026 08:30:34 +0000 (0:00:01.134) 0:44:40.839 ******** 2026-04-16 08:30:40.805198 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.805208 | orchestrator | 2026-04-16 08:30:40.805219 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:30:40.805230 | orchestrator | Thursday 16 April 2026 08:30:34 +0000 (0:00:00.769) 0:44:41.608 ******** 2026-04-16 08:30:40.805241 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.805252 | orchestrator | 2026-04-16 08:30:40.805263 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:30:40.805273 | orchestrator | Thursday 16 April 2026 08:30:35 +0000 (0:00:00.781) 0:44:42.390 ******** 2026-04-16 08:30:40.805284 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-04-16 08:30:40.805295 | orchestrator | 2026-04-16 08:30:40.805306 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:30:40.805317 | orchestrator | Thursday 16 April 2026 08:30:36 +0000 (0:00:01.104) 0:44:43.494 ******** 2026-04-16 08:30:40.805327 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:30:40.805358 | orchestrator | 2026-04-16 08:30:40.805369 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:30:40.805380 | orchestrator | Thursday 16 April 2026 08:30:38 +0000 (0:00:01.683) 0:44:45.178 ******** 2026-04-16 08:30:40.805391 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:30:40.805402 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:30:40.805413 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:30:40.805423 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.805434 | orchestrator | 2026-04-16 08:30:40.805445 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:30:40.805455 | orchestrator | Thursday 16 April 2026 08:30:39 +0000 (0:00:01.094) 0:44:46.273 ******** 2026-04-16 08:30:40.805466 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:30:40.805477 | orchestrator | 2026-04-16 08:30:40.805488 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 08:30:40.805498 | orchestrator | Thursday 16 April 2026 08:30:40 +0000 (0:00:01.198) 0:44:47.472 ******** 2026-04-16 08:30:40.805521 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.374498 | orchestrator | 2026-04-16 08:31:24.374651 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:31:24.374680 | 
orchestrator | Thursday 16 April 2026 08:30:41 +0000 (0:00:01.158) 0:44:48.630 ******** 2026-04-16 08:31:24.374692 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.374703 | orchestrator | 2026-04-16 08:31:24.374714 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:31:24.374724 | orchestrator | Thursday 16 April 2026 08:30:43 +0000 (0:00:01.166) 0:44:49.797 ******** 2026-04-16 08:31:24.374734 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.374743 | orchestrator | 2026-04-16 08:31:24.374769 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:31:24.374779 | orchestrator | Thursday 16 April 2026 08:30:44 +0000 (0:00:01.143) 0:44:50.940 ******** 2026-04-16 08:31:24.374789 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.374821 | orchestrator | 2026-04-16 08:31:24.374831 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:31:24.374841 | orchestrator | Thursday 16 April 2026 08:30:44 +0000 (0:00:00.775) 0:44:51.716 ******** 2026-04-16 08:31:24.374851 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:31:24.374862 | orchestrator | 2026-04-16 08:31:24.374872 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:31:24.374882 | orchestrator | Thursday 16 April 2026 08:30:47 +0000 (0:00:02.273) 0:44:53.990 ******** 2026-04-16 08:31:24.374892 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:31:24.374901 | orchestrator | 2026-04-16 08:31:24.374911 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:31:24.374921 | orchestrator | Thursday 16 April 2026 08:30:48 +0000 (0:00:00.787) 0:44:54.777 ******** 2026-04-16 08:31:24.374931 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-04-16 08:31:24.374940 | orchestrator | 2026-04-16 08:31:24.374951 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:31:24.374962 | orchestrator | Thursday 16 April 2026 08:30:49 +0000 (0:00:01.256) 0:44:56.034 ******** 2026-04-16 08:31:24.374973 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.374984 | orchestrator | 2026-04-16 08:31:24.374995 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:31:24.375012 | orchestrator | Thursday 16 April 2026 08:30:50 +0000 (0:00:01.165) 0:44:57.200 ******** 2026-04-16 08:31:24.375026 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375041 | orchestrator | 2026-04-16 08:31:24.375056 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:31:24.375072 | orchestrator | Thursday 16 April 2026 08:30:51 +0000 (0:00:01.132) 0:44:58.333 ******** 2026-04-16 08:31:24.375088 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375104 | orchestrator | 2026-04-16 08:31:24.375120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:31:24.375135 | orchestrator | Thursday 16 April 2026 08:30:52 +0000 (0:00:01.098) 0:44:59.431 ******** 2026-04-16 08:31:24.375152 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375168 | orchestrator | 2026-04-16 08:31:24.375184 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 08:31:24.375199 | orchestrator | Thursday 16 April 2026 08:30:53 +0000 (0:00:01.115) 0:45:00.547 ******** 2026-04-16 08:31:24.375215 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375231 | orchestrator | 2026-04-16 08:31:24.375248 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:31:24.375265 | orchestrator | 
Thursday 16 April 2026 08:30:54 +0000 (0:00:01.123) 0:45:01.671 ******** 2026-04-16 08:31:24.375282 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375298 | orchestrator | 2026-04-16 08:31:24.375313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:31:24.375324 | orchestrator | Thursday 16 April 2026 08:30:56 +0000 (0:00:01.165) 0:45:02.837 ******** 2026-04-16 08:31:24.375334 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375343 | orchestrator | 2026-04-16 08:31:24.375352 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:31:24.375362 | orchestrator | Thursday 16 April 2026 08:30:57 +0000 (0:00:01.131) 0:45:03.968 ******** 2026-04-16 08:31:24.375372 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:31:24.375381 | orchestrator | 2026-04-16 08:31:24.375429 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:31:24.375440 | orchestrator | Thursday 16 April 2026 08:30:58 +0000 (0:00:01.173) 0:45:05.141 ******** 2026-04-16 08:31:24.375450 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:31:24.375459 | orchestrator | 2026-04-16 08:31:24.375469 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:31:24.375479 | orchestrator | Thursday 16 April 2026 08:30:59 +0000 (0:00:00.798) 0:45:05.940 ******** 2026-04-16 08:31:24.375501 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-04-16 08:31:24.375512 | orchestrator | 2026-04-16 08:31:24.375522 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:31:24.375531 | orchestrator | Thursday 16 April 2026 08:31:00 +0000 (0:00:01.086) 0:45:07.026 ******** 2026-04-16 08:31:24.375541 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-04-16 08:31:24.375552 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-16 08:31:24.375561 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-16 08:31:24.375571 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-16 08:31:24.375581 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-16 08:31:24.375590 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-16 08:31:24.375600 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-16 08:31:24.375610 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:31:24.375620 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:31:24.375649 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:31:24.375660 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:31:24.375670 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:31:24.375679 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:31:24.375689 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:31:24.375699 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-16 08:31:24.375708 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-16 08:31:24.375718 | orchestrator | 2026-04-16 08:31:24.375735 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:31:24.375745 | orchestrator | Thursday 16 April 2026 08:31:07 +0000 (0:00:06.996) 0:45:14.023 ******** 2026-04-16 08:31:24.375755 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-16 08:31:24.375765 | orchestrator | 2026-04-16 08:31:24.375774 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] *****************
2026-04-16 08:31:24.375786 | orchestrator | Thursday 16 April 2026 08:31:08 +0000 (0:00:01.104) 0:45:15.127 ********
2026-04-16 08:31:24.375802 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:31:24.375827 | orchestrator |
2026-04-16 08:31:24.375844 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-04-16 08:31:24.375861 | orchestrator | Thursday 16 April 2026 08:31:09 +0000 (0:00:01.464) 0:45:16.592 ********
2026-04-16 08:31:24.375877 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:31:24.375891 | orchestrator |
2026-04-16 08:31:24.375906 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-04-16 08:31:24.375922 | orchestrator | Thursday 16 April 2026 08:31:11 +0000 (0:00:01.608) 0:45:18.201 ********
2026-04-16 08:31:24.375937 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.375953 | orchestrator |
2026-04-16 08:31:24.375969 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-04-16 08:31:24.375986 | orchestrator | Thursday 16 April 2026 08:31:12 +0000 (0:00:00.737) 0:45:18.938 ********
2026-04-16 08:31:24.376002 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376018 | orchestrator |
2026-04-16 08:31:24.376034 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-04-16 08:31:24.376050 | orchestrator | Thursday 16 April 2026 08:31:12 +0000 (0:00:00.778) 0:45:19.717 ********
2026-04-16 08:31:24.376068 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376083 | orchestrator |
2026-04-16 08:31:24.376100 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-04-16 08:31:24.376123 | orchestrator | Thursday 16 April 2026 08:31:13 +0000 (0:00:00.770) 0:45:20.487 ********
2026-04-16 08:31:24.376133 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376143 | orchestrator |
2026-04-16 08:31:24.376152 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-04-16 08:31:24.376162 | orchestrator | Thursday 16 April 2026 08:31:14 +0000 (0:00:00.777) 0:45:21.265 ********
2026-04-16 08:31:24.376171 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376181 | orchestrator |
2026-04-16 08:31:24.376191 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-04-16 08:31:24.376200 | orchestrator | Thursday 16 April 2026 08:31:15 +0000 (0:00:00.751) 0:45:22.017 ********
2026-04-16 08:31:24.376210 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376219 | orchestrator |
2026-04-16 08:31:24.376229 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-04-16 08:31:24.376239 | orchestrator | Thursday 16 April 2026 08:31:16 +0000 (0:00:00.760) 0:45:22.778 ********
2026-04-16 08:31:24.376248 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376258 | orchestrator |
2026-04-16 08:31:24.376267 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-04-16 08:31:24.376277 | orchestrator | Thursday 16 April 2026 08:31:16 +0000 (0:00:00.760) 0:45:23.538 ********
2026-04-16 08:31:24.376287 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376296 | orchestrator |
2026-04-16 08:31:24.376306 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:31:24.376316 | orchestrator | Thursday 16 April 2026 08:31:17 +0000 (0:00:00.763) 0:45:24.302 ********
2026-04-16 08:31:24.376325 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376335 | orchestrator |
2026-04-16 08:31:24.376345 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:31:24.376354 | orchestrator | Thursday 16 April 2026 08:31:18 +0000 (0:00:00.783) 0:45:25.086 ********
2026-04-16 08:31:24.376364 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:31:24.376373 | orchestrator |
2026-04-16 08:31:24.376383 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:31:24.376422 | orchestrator | Thursday 16 April 2026 08:31:19 +0000 (0:00:00.760) 0:45:25.847 ********
2026-04-16 08:31:24.376432 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:31:24.376442 | orchestrator |
2026-04-16 08:31:24.376451 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:31:24.376461 | orchestrator | Thursday 16 April 2026 08:31:19 +0000 (0:00:00.837) 0:45:26.685 ********
2026-04-16 08:31:24.376470 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-16 08:31:24.376480 | orchestrator |
2026-04-16 08:31:24.376489 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:31:24.376499 | orchestrator | Thursday 16 April 2026 08:31:24 +0000 (0:00:04.332) 0:45:31.018 ********
2026-04-16 08:31:24.376520 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:32:05.879624 | orchestrator |
2026-04-16 08:32:05.879740 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:32:05.879757 | orchestrator | Thursday 16 April 2026 08:31:25 +0000 (0:00:00.834) 0:45:31.852 ********
2026-04-16 08:32:05.879787 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-16 08:32:05.879803 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-16 08:32:05.879841 | orchestrator |
2026-04-16 08:32:05.879854 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:32:05.879865 | orchestrator | Thursday 16 April 2026 08:31:32 +0000 (0:00:07.545) 0:45:39.398 ********
2026-04-16 08:32:05.879876 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.879888 | orchestrator |
2026-04-16 08:32:05.879899 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:32:05.879910 | orchestrator | Thursday 16 April 2026 08:31:33 +0000 (0:00:00.800) 0:45:40.198 ********
2026-04-16 08:32:05.879921 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.879932 | orchestrator |
2026-04-16 08:32:05.879943 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:32:05.879955 | orchestrator | Thursday 16 April 2026 08:31:34 +0000 (0:00:00.765) 0:45:40.964 ********
2026-04-16 08:32:05.879966 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.879976 | orchestrator |
2026-04-16 08:32:05.879987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:32:05.879998 | orchestrator | Thursday 16 April 2026 08:31:34 +0000 (0:00:00.771) 0:45:41.735 ********
2026-04-16 08:32:05.880009 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880020 | orchestrator |
2026-04-16 08:32:05.880030 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:32:05.880041 | orchestrator | Thursday 16 April 2026 08:31:35 +0000 (0:00:00.806) 0:45:42.541 ********
2026-04-16 08:32:05.880052 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880062 | orchestrator |
2026-04-16 08:32:05.880073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:32:05.880086 | orchestrator | Thursday 16 April 2026 08:31:36 +0000 (0:00:00.778) 0:45:43.320 ********
2026-04-16 08:32:05.880099 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.880112 | orchestrator |
2026-04-16 08:32:05.880125 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:32:05.880142 | orchestrator | Thursday 16 April 2026 08:31:37 +0000 (0:00:00.863) 0:45:44.183 ********
2026-04-16 08:32:05.880162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:32:05.880182 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:32:05.880202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:32:05.880223 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880243 | orchestrator |
2026-04-16 08:32:05.880263 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:32:05.880284 | orchestrator | Thursday 16 April 2026 08:31:38 +0000 (0:00:01.360) 0:45:45.544 ********
2026-04-16 08:32:05.880305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:32:05.880326 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:32:05.880348 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:32:05.880367 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880386 | orchestrator |
2026-04-16 08:32:05.880406 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:32:05.880464 | orchestrator | Thursday 16 April 2026 08:31:40 +0000 (0:00:01.386) 0:45:46.930 ********
2026-04-16 08:32:05.880487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:32:05.880508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:32:05.880527 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:32:05.880545 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880556 | orchestrator |
2026-04-16 08:32:05.880567 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:32:05.880590 | orchestrator | Thursday 16 April 2026 08:31:41 +0000 (0:00:01.103) 0:45:48.034 ********
2026-04-16 08:32:05.880601 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.880612 | orchestrator |
2026-04-16 08:32:05.880623 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:32:05.880633 | orchestrator | Thursday 16 April 2026 08:31:42 +0000 (0:00:00.805) 0:45:48.840 ********
2026-04-16 08:32:05.880644 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 08:32:05.880655 | orchestrator |
2026-04-16 08:32:05.880666 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:32:05.880676 | orchestrator | Thursday 16 April 2026 08:31:43 +0000 (0:00:01.062) 0:45:49.902 ********
2026-04-16 08:32:05.880687 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.880698 | orchestrator |
2026-04-16 08:32:05.880708 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-04-16 08:32:05.880719 | orchestrator | Thursday 16 April 2026 08:31:44 +0000 (0:00:01.373) 0:45:51.276 ********
2026-04-16 08:32:05.880730 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.880741 | orchestrator |
2026-04-16 08:32:05.880772 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-04-16 08:32:05.880784 | orchestrator | Thursday 16 April 2026 08:31:45 +0000 (0:00:00.873) 0:45:52.149 ********
2026-04-16 08:32:05.880796 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:32:05.880807 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:32:05.880818 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:32:05.880829 | orchestrator |
2026-04-16 08:32:05.880854 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-04-16 08:32:05.880870 | orchestrator | Thursday 16 April 2026 08:31:46 +0000 (0:00:01.589) 0:45:53.738 ********
2026-04-16 08:32:05.880881 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5
2026-04-16 08:32:05.880892 | orchestrator |
2026-04-16 08:32:05.880902 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-04-16 08:32:05.880913 | orchestrator | Thursday 16 April 2026 08:31:48 +0000 (0:00:01.106) 0:45:54.845 ********
2026-04-16 08:32:05.880924 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880934 | orchestrator |
2026-04-16 08:32:05.880945 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-04-16 08:32:05.880956 | orchestrator | Thursday 16 April 2026 08:31:49 +0000 (0:00:01.079) 0:45:55.924 ********
2026-04-16 08:32:05.880967 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.880977 | orchestrator |
2026-04-16 08:32:05.880988 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-04-16 08:32:05.880999 | orchestrator | Thursday 16 April 2026 08:31:50 +0000 (0:00:01.101) 0:45:57.026 ********
2026-04-16 08:32:05.881010 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.881020 | orchestrator |
2026-04-16 08:32:05.881031 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-04-16 08:32:05.881042 | orchestrator | Thursday 16 April 2026 08:31:51 +0000 (0:00:01.403) 0:45:58.429 ********
2026-04-16 08:32:05.881053 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.881064 | orchestrator |
2026-04-16 08:32:05.881075 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-04-16 08:32:05.881085 | orchestrator | Thursday 16 April 2026 08:31:52 +0000 (0:00:01.138) 0:45:59.568 ********
2026-04-16 08:32:05.881096 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-04-16 08:32:05.881107 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-04-16 08:32:05.881118 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-04-16 08:32:05.881129 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-04-16 08:32:05.881140 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-04-16 08:32:05.881159 | orchestrator |
2026-04-16 08:32:05.881169 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-04-16 08:32:05.881180 | orchestrator | Thursday 16 April 2026 08:31:55 +0000 (0:00:02.569) 0:46:02.138 ********
2026-04-16 08:32:05.881191 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.881202 | orchestrator |
2026-04-16 08:32:05.881213 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-04-16 08:32:05.881223 | orchestrator | Thursday 16 April 2026 08:31:56 +0000 (0:00:00.752) 0:46:02.890 ********
2026-04-16 08:32:05.881234 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5
2026-04-16 08:32:05.881245 | orchestrator |
2026-04-16 08:32:05.881255 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-04-16 08:32:05.881266 | orchestrator | Thursday 16 April 2026 08:31:57 +0000 (0:00:01.095) 0:46:03.986 ********
2026-04-16 08:32:05.881277 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-04-16 08:32:05.881288 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-04-16 08:32:05.881299 | orchestrator |
2026-04-16 08:32:05.881309 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-04-16 08:32:05.881320 | orchestrator | Thursday 16 April 2026 08:31:59 +0000 (0:00:01.907) 0:46:05.893 ********
2026-04-16 08:32:05.881331 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:32:05.881341 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:32:05.881352 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-16 08:32:05.881363 | orchestrator |
2026-04-16 08:32:05.881374 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:32:05.881385 | orchestrator | Thursday 16 April 2026 08:32:02 +0000 (0:00:03.287) 0:46:09.181 ********
2026-04-16 08:32:05.881395 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-16 08:32:05.881406 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:32:05.881417 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:32:05.881455 | orchestrator |
2026-04-16 08:32:05.881467 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-04-16 08:32:05.881478 | orchestrator | Thursday 16 April 2026 08:32:04 +0000 (0:00:01.659) 0:46:10.841 ********
2026-04-16 08:32:05.881489 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.881500 | orchestrator |
2026-04-16 08:32:05.881510 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-04-16 08:32:05.881521 | orchestrator | Thursday 16 April 2026 08:32:04 +0000 (0:00:00.853) 0:46:11.694 ********
2026-04-16 08:32:05.881532 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.881542 | orchestrator |
2026-04-16 08:32:05.881553 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-04-16 08:32:05.881564 | orchestrator | Thursday 16 April 2026 08:32:05 +0000 (0:00:00.789) 0:46:12.484 ********
2026-04-16 08:32:05.881574 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:32:05.881585 | orchestrator |
2026-04-16 08:32:05.881603 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-04-16 08:34:27.602392 | orchestrator | Thursday 16 April 2026 08:32:06 +0000 (0:00:00.761) 0:46:13.245 ********
2026-04-16 08:34:27.602512 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5
2026-04-16 08:34:27.602533 | orchestrator |
2026-04-16 08:34:27.602547 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-04-16 08:34:27.602671 | orchestrator | Thursday 16 April 2026 08:32:07 +0000 (0:00:01.164) 0:46:14.410 ********
2026-04-16 08:34:27.602684 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.602696 | orchestrator |
2026-04-16 08:34:27.602725 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-04-16 08:34:27.602736 | orchestrator | Thursday 16 April 2026 08:32:09 +0000 (0:00:01.463) 0:46:15.873 ********
2026-04-16 08:34:27.602774 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.602785 | orchestrator |
2026-04-16 08:34:27.602796 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-04-16 08:34:27.602807 | orchestrator | Thursday 16 April 2026 08:32:12 +0000 (0:00:03.559) 0:46:19.433 ********
2026-04-16 08:34:27.602818 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5
2026-04-16 08:34:27.602829 | orchestrator |
2026-04-16 08:34:27.602840 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-04-16 08:34:27.602851 | orchestrator | Thursday 16 April 2026 08:32:13 +0000 (0:00:01.101) 0:46:20.535 ********
2026-04-16 08:34:27.602862 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.602873 | orchestrator |
2026-04-16 08:34:27.602884 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-04-16 08:34:27.602895 | orchestrator | Thursday 16 April 2026 08:32:15 +0000 (0:00:01.975) 0:46:22.510 ********
2026-04-16 08:34:27.602905 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.602916 | orchestrator |
2026-04-16 08:34:27.602927 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-04-16 08:34:27.602937 | orchestrator | Thursday 16 April 2026 08:32:17 +0000 (0:00:01.890) 0:46:24.401 ********
2026-04-16 08:34:27.602948 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.602959 | orchestrator |
2026-04-16 08:34:27.602969 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-04-16 08:34:27.602980 | orchestrator | Thursday 16 April 2026 08:32:19 +0000 (0:00:02.258) 0:46:26.660 ********
2026-04-16 08:34:27.602991 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603002 | orchestrator |
2026-04-16 08:34:27.603013 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-04-16 08:34:27.603024 | orchestrator | Thursday 16 April 2026 08:32:21 +0000 (0:00:01.110) 0:46:27.771 ********
2026-04-16 08:34:27.603035 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603046 | orchestrator |
2026-04-16 08:34:27.603056 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-04-16 08:34:27.603067 | orchestrator | Thursday 16 April 2026 08:32:22 +0000 (0:00:01.115) 0:46:28.887 ********
2026-04-16 08:34:27.603078 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-16 08:34:27.603089 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-16 08:34:27.603099 | orchestrator |
2026-04-16 08:34:27.603110 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-04-16 08:34:27.603121 | orchestrator | Thursday 16 April 2026 08:32:24 +0000 (0:00:01.874) 0:46:30.762 ********
2026-04-16 08:34:27.603132 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-04-16 08:34:27.603143 | orchestrator | ok: [testbed-node-5] => (item=1)
2026-04-16 08:34:27.603153 | orchestrator |
2026-04-16 08:34:27.603164 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-04-16 08:34:27.603175 | orchestrator | Thursday 16 April 2026 08:32:26 +0000 (0:00:02.932) 0:46:33.694 ********
2026-04-16 08:34:27.603185 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-04-16 08:34:27.603196 | orchestrator | changed: [testbed-node-5] => (item=1)
2026-04-16 08:34:27.603207 | orchestrator |
2026-04-16 08:34:27.603218 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-04-16 08:34:27.603228 | orchestrator | Thursday 16 April 2026 08:32:31 +0000 (0:00:04.370) 0:46:38.065 ********
2026-04-16 08:34:27.603239 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603250 | orchestrator |
2026-04-16 08:34:27.603261 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-04-16 08:34:27.603271 | orchestrator | Thursday 16 April 2026 08:32:32 +0000 (0:00:01.240) 0:46:39.305 ********
2026-04-16 08:34:27.603282 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-04-16 08:34:27.603294 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:34:27.603305 | orchestrator |
2026-04-16 08:34:27.603316 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-04-16 08:34:27.603335 | orchestrator | Thursday 16 April 2026 08:32:45 +0000 (0:00:12.997) 0:46:52.303 ********
2026-04-16 08:34:27.603346 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603356 | orchestrator |
2026-04-16 08:34:27.603367 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-04-16 08:34:27.603378 | orchestrator | Thursday 16 April 2026 08:32:46 +0000 (0:00:00.855) 0:46:53.159 ********
2026-04-16 08:34:27.603389 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603400 | orchestrator |
2026-04-16 08:34:27.603411 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-04-16 08:34:27.603421 | orchestrator | Thursday 16 April 2026 08:32:47 +0000 (0:00:00.792) 0:46:53.952 ********
2026-04-16 08:34:27.603432 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:34:27.603443 | orchestrator |
2026-04-16 08:34:27.603454 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-04-16 08:34:27.603466 | orchestrator | Thursday 16 April 2026 08:32:47 +0000 (0:00:00.748) 0:46:54.700 ********
2026-04-16 08:34:27.603486 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-04-16 08:34:27.603506 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:34:27.603523 | orchestrator |
2026-04-16 08:34:27.603590 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-04-16 08:34:27.603613 | orchestrator |
2026-04-16 08:34:27.603630 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:34:27.603641 | orchestrator | Thursday 16 April 2026 08:32:53 +0000 (0:00:05.335) 0:47:00.035 ********
2026-04-16 08:34:27.603652 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:34:27.603663 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:34:27.603674 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.603685 | orchestrator |
2026-04-16 08:34:27.603696 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:34:27.603707 | orchestrator | Thursday 16 April 2026 08:32:54 +0000 (0:00:01.669) 0:47:01.706 ********
2026-04-16 08:34:27.603717 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:34:27.603728 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:34:27.603739 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:34:27.603750 | orchestrator |
2026-04-16 08:34:27.603761 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-04-16 08:34:27.603772 | orchestrator | Thursday 16 April 2026 08:32:56 +0000 (0:00:01.601) 0:47:03.308 ********
2026-04-16 08:34:27.603783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-04-16 08:34:27.603794 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-04-16 08:34:27.603847 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-04-16 08:34:27.603859 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-04-16 08:34:27.603872 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-04-16 08:34:27.603883 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-04-16 08:34:27.603894 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-04-16 08:34:27.603905 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-04-16 08:34:27.603916 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-04-16 08:34:27.603927 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-04-16 08:34:27.603955 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-04-16 08:34:27.603976 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-04-16 08:34:27.603987 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-04-16 08:34:27.603998 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-04-16 08:34:27.604009 | orchestrator |
2026-04-16 08:34:27.604019 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-04-16 08:34:27.604030 | orchestrator | Thursday 16 April 2026 08:34:12 +0000 (0:01:15.953) 0:48:19.261 ********
2026-04-16 08:34:27.604041 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-04-16 08:34:27.604052 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-04-16 08:34:27.604063 | orchestrator |
2026-04-16 08:34:27.604073 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-04-16 08:34:27.604084 | orchestrator | Thursday 16 April 2026 08:34:18 +0000 (0:00:05.505) 0:48:24.767 ********
2026-04-16 08:34:27.604095 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:34:27.604106 | orchestrator |
2026-04-16 08:34:27.604117 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-04-16 08:34:27.604127 | orchestrator |
2026-04-16 08:34:27.604138 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 08:34:27.604149 | orchestrator | Thursday 16 April 2026 08:34:21 +0000 (0:00:03.241) 0:48:28.009 ********
2026-04-16 08:34:27.604160 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-04-16 08:34:27.604171 | orchestrator |
2026-04-16 08:34:27.604182 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 08:34:27.604192 | orchestrator | Thursday 16 April 2026 08:34:22 +0000 (0:00:01.154) 0:48:29.163 ********
2026-04-16 08:34:27.604203 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:27.604214 | orchestrator |
2026-04-16 08:34:27.604225 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 08:34:27.604236 | orchestrator | Thursday 16 April 2026 08:34:23 +0000 (0:00:01.457) 0:48:30.621 ********
2026-04-16 08:34:27.604247 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:27.604257 | orchestrator |
2026-04-16 08:34:27.604268 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:34:27.604279 | orchestrator | Thursday 16 April 2026 08:34:24 +0000 (0:00:01.120) 0:48:31.742 ********
2026-04-16 08:34:27.604290 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:27.604301 | orchestrator |
2026-04-16 08:34:27.604311 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:34:27.604322 | orchestrator | Thursday 16 April 2026 08:34:26 +0000 (0:00:01.403) 0:48:33.145 ********
2026-04-16 08:34:27.604333 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:27.604344 | orchestrator |
2026-04-16 08:34:27.604354 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 08:34:27.604365 | orchestrator | Thursday 16 April 2026 08:34:27 +0000 (0:00:01.136) 0:48:34.282 ********
2026-04-16 08:34:27.604387 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:52.500352 | orchestrator |
2026-04-16 08:34:52.500474 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 08:34:52.500492 | orchestrator | Thursday 16 April 2026 08:34:28 +0000 (0:00:01.128) 0:48:35.411 ********
2026-04-16 08:34:52.500505 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:52.500518 | orchestrator |
2026-04-16 08:34:52.500530 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 08:34:52.500542 | orchestrator | Thursday 16 April 2026 08:34:29 +0000 (0:00:01.144) 0:48:36.555 ********
2026-04-16 08:34:52.500568 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:34:52.500636 | orchestrator |
2026-04-16 08:34:52.500648 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 08:34:52.500659 | orchestrator | Thursday 16 April 2026 08:34:30 +0000 (0:00:01.156) 0:48:37.711 ********
2026-04-16 08:34:52.500695 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:52.500706 | orchestrator |
2026-04-16 08:34:52.500717 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 08:34:52.500728 | orchestrator | Thursday 16 April 2026 08:34:32 +0000 (0:00:01.142) 0:48:38.853 ********
2026-04-16 08:34:52.500739 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:34:52.500750 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:34:52.500761 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:34:52.500772 | orchestrator |
2026-04-16 08:34:52.500783 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 08:34:52.500793 | orchestrator | Thursday 16 April 2026 08:34:33 +0000 (0:00:01.627) 0:48:40.481 ********
2026-04-16 08:34:52.500804 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:34:52.500815 | orchestrator |
2026-04-16 08:34:52.500826 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 08:34:52.500836 | orchestrator | Thursday 16 April 2026 08:34:34 +0000 (0:00:01.253) 0:48:41.734 ********
2026-04-16 08:34:52.500847 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:34:52.500858 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:34:52.500869 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:34:52.500880 | orchestrator |
2026-04-16 08:34:52.500893 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 08:34:52.500905 | orchestrator | Thursday 16 April 2026 08:34:38 +0000 (0:00:03.267) 0:48:45.002 ********
2026-04-16 08:34:52.500918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 08:34:52.500931 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 08:34:52.500943 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 08:34:52.500957 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:34:52.500969 | orchestrator |
2026-04-16 08:34:52.500981 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 08:34:52.500993 | orchestrator | Thursday 16 April 2026 08:34:39 +0000 (0:00:01.397) 0:48:46.400 ********
2026-04-16 08:34:52.501007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501022 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501034 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501045 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:34:52.501056 | orchestrator |
2026-04-16 08:34:52.501067 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 08:34:52.501078 | orchestrator | Thursday 16 April 2026 08:34:41 +0000 (0:00:01.972) 0:48:48.373 ********
2026-04-16 08:34:52.501091 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501105 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501145 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501157 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:34:52.501168 | orchestrator |
2026-04-16 08:34:52.501179 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 08:34:52.501196 | orchestrator | Thursday 16 April 2026 08:34:42 +0000 (0:00:01.195) 0:48:49.568 ********
2026-04-16 08:34:52.501208 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:34:35.572229', 'end': '2026-04-16 08:34:35.641417', 'delta': '0:00:00.069188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501223 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:34:36.184078', 'end': '2026-04-16 08:34:36.228522', 'delta': '0:00:00.044444', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501235 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:34:37.025220', 'end': '2026-04-16 08:34:37.068843', 'delta': '0:00:00.043623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:34:52.501247
| orchestrator | 2026-04-16 08:34:52.501258 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:34:52.501269 | orchestrator | Thursday 16 April 2026 08:34:44 +0000 (0:00:01.238) 0:48:50.807 ******** 2026-04-16 08:34:52.501280 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:34:52.501291 | orchestrator | 2026-04-16 08:34:52.501301 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:34:52.501315 | orchestrator | Thursday 16 April 2026 08:34:45 +0000 (0:00:01.599) 0:48:52.407 ******** 2026-04-16 08:34:52.501343 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:34:52.501362 | orchestrator | 2026-04-16 08:34:52.501381 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:34:52.501398 | orchestrator | Thursday 16 April 2026 08:34:46 +0000 (0:00:01.249) 0:48:53.656 ******** 2026-04-16 08:34:52.501415 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:34:52.501433 | orchestrator | 2026-04-16 08:34:52.501450 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:34:52.501468 | orchestrator | Thursday 16 April 2026 08:34:48 +0000 (0:00:01.177) 0:48:54.834 ******** 2026-04-16 08:34:52.501487 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:34:52.501505 | orchestrator | 2026-04-16 08:34:52.501523 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:34:52.501544 | orchestrator | Thursday 16 April 2026 08:34:50 +0000 (0:00:01.981) 0:48:56.815 ******** 2026-04-16 08:34:52.501564 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:34:52.501607 | orchestrator | 2026-04-16 08:34:52.501619 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:34:52.501630 | orchestrator | Thursday 16 April 2026 08:34:51 +0000 (0:00:01.156) 
0:48:57.972 ******** 2026-04-16 08:34:52.501641 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:34:52.501652 | orchestrator | 2026-04-16 08:34:52.501662 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:34:52.501673 | orchestrator | Thursday 16 April 2026 08:34:52 +0000 (0:00:01.125) 0:48:59.097 ******** 2026-04-16 08:34:52.501695 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.843321 | orchestrator | 2026-04-16 08:35:02.843458 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:35:02.843485 | orchestrator | Thursday 16 April 2026 08:34:53 +0000 (0:00:01.224) 0:49:00.322 ******** 2026-04-16 08:35:02.843502 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844484 | orchestrator | 2026-04-16 08:35:02.844521 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:35:02.844533 | orchestrator | Thursday 16 April 2026 08:34:54 +0000 (0:00:01.115) 0:49:01.438 ******** 2026-04-16 08:35:02.844562 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844573 | orchestrator | 2026-04-16 08:35:02.844622 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:35:02.844636 | orchestrator | Thursday 16 April 2026 08:34:55 +0000 (0:00:01.134) 0:49:02.573 ******** 2026-04-16 08:35:02.844647 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844658 | orchestrator | 2026-04-16 08:35:02.844669 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:35:02.844680 | orchestrator | Thursday 16 April 2026 08:34:56 +0000 (0:00:01.102) 0:49:03.675 ******** 2026-04-16 08:35:02.844692 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844702 | orchestrator | 2026-04-16 08:35:02.844714 | orchestrator | TASK [ceph-facts : Set_fact build 
dedicated_devices from resolved symlinks] **** 2026-04-16 08:35:02.844725 | orchestrator | Thursday 16 April 2026 08:34:58 +0000 (0:00:01.129) 0:49:04.805 ******** 2026-04-16 08:35:02.844736 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844747 | orchestrator | 2026-04-16 08:35:02.844758 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:35:02.844769 | orchestrator | Thursday 16 April 2026 08:34:59 +0000 (0:00:01.118) 0:49:05.923 ******** 2026-04-16 08:35:02.844780 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844791 | orchestrator | 2026-04-16 08:35:02.844802 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:35:02.844814 | orchestrator | Thursday 16 April 2026 08:35:00 +0000 (0:00:01.114) 0:49:07.038 ******** 2026-04-16 08:35:02.844825 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.844836 | orchestrator | 2026-04-16 08:35:02.844846 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:35:02.844857 | orchestrator | Thursday 16 April 2026 08:35:01 +0000 (0:00:01.143) 0:49:08.182 ******** 2026-04-16 08:35:02.844893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.844909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.844920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.844934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:35:02.844949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.844984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': 
{}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.845003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.845018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': 
['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:35:02.845043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.845055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:35:02.845066 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:02.845078 | orchestrator | 2026-04-16 08:35:02.845089 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:35:02.845100 | orchestrator | Thursday 16 April 
2026 08:35:02 +0000 (0:00:01.337) 0:49:09.520 ******** 2026-04-16 08:35:02.845121 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.168862 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-47-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169079 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169097 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169179 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2c911509', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14', 
'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1', 'scsi-SQEMU_QEMU_HARDDISK_2c911509-71b2-4fd0-889a-85a88ccb094b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169217 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169237 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:35:08.169255 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:35:08.169275 | orchestrator | 2026-04-16 08:35:08.169294 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:35:08.169314 | orchestrator | Thursday 16 April 2026 08:35:04 +0000 (0:00:01.240) 0:49:10.760 ******** 2026-04-16 08:35:08.169333 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:35:08.169352 | orchestrator | 2026-04-16 08:35:08.169371 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:35:08.169388 | orchestrator 
| Thursday 16 April 2026 08:35:05 +0000 (0:00:01.521) 0:49:12.281 ******** 2026-04-16 08:35:08.169407 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:35:08.169426 | orchestrator | 2026-04-16 08:35:08.169445 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:35:08.169463 | orchestrator | Thursday 16 April 2026 08:35:06 +0000 (0:00:01.143) 0:49:13.425 ******** 2026-04-16 08:35:08.169480 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:35:08.169499 | orchestrator | 2026-04-16 08:35:08.169517 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:35:08.169550 | orchestrator | Thursday 16 April 2026 08:35:08 +0000 (0:00:01.494) 0:49:14.919 ******** 2026-04-16 08:36:01.988199 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:36:01.988344 | orchestrator | 2026-04-16 08:36:01.988376 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:36:01.988389 | orchestrator | Thursday 16 April 2026 08:35:09 +0000 (0:00:01.118) 0:49:16.038 ******** 2026-04-16 08:36:01.988401 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:36:01.988412 | orchestrator | 2026-04-16 08:36:01.988423 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:36:01.988434 | orchestrator | Thursday 16 April 2026 08:35:10 +0000 (0:00:01.253) 0:49:17.292 ******** 2026-04-16 08:36:01.988445 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:36:01.988456 | orchestrator | 2026-04-16 08:36:01.988467 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:36:01.988478 | orchestrator | Thursday 16 April 2026 08:35:11 +0000 (0:00:01.185) 0:49:18.477 ******** 2026-04-16 08:36:01.988489 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:36:01.988501 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-04-16 08:36:01.988512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-04-16 08:36:01.988522 | orchestrator | 2026-04-16 08:36:01.988533 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:36:01.988544 | orchestrator | Thursday 16 April 2026 08:35:13 +0000 (0:00:01.975) 0:49:20.452 ******** 2026-04-16 08:36:01.988555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-04-16 08:36:01.988566 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-04-16 08:36:01.988577 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-04-16 08:36:01.988588 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:36:01.988599 | orchestrator | 2026-04-16 08:36:01.988610 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:36:01.988694 | orchestrator | Thursday 16 April 2026 08:35:14 +0000 (0:00:01.187) 0:49:21.640 ******** 2026-04-16 08:36:01.988707 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:36:01.988718 | orchestrator | 2026-04-16 08:36:01.988730 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:36:01.988744 | orchestrator | Thursday 16 April 2026 08:35:16 +0000 (0:00:01.118) 0:49:22.759 ******** 2026-04-16 08:36:01.988757 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:36:01.988769 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:36:01.988783 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:36:01.988796 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:36:01.988809 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-04-16 08:36:01.988821 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:36:01.988833 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:36:01.988846 | orchestrator | 2026-04-16 08:36:01.988859 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:36:01.988872 | orchestrator | Thursday 16 April 2026 08:35:18 +0000 (0:00:02.098) 0:49:24.857 ******** 2026-04-16 08:36:01.988885 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-04-16 08:36:01.988897 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:36:01.988909 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:36:01.988922 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:36:01.988935 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:36:01.988948 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:36:01.988960 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:36:01.988980 | orchestrator | 2026-04-16 08:36:01.988991 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-04-16 08:36:01.989002 | orchestrator | Thursday 16 April 2026 08:35:20 +0000 (0:00:02.818) 0:49:27.676 ******** 2026-04-16 08:36:01.989042 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:36:01.989053 | orchestrator | 2026-04-16 08:36:01.989064 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-04-16 08:36:01.989075 | orchestrator | Thursday 16 April 2026 08:35:24 +0000 (0:00:03.229) 
0:49:30.905 ******** 2026-04-16 08:36:01.989086 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:36:01.989097 | orchestrator | 2026-04-16 08:36:01.989108 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-04-16 08:36:01.989119 | orchestrator | Thursday 16 April 2026 08:35:27 +0000 (0:00:02.990) 0:49:33.896 ******** 2026-04-16 08:36:01.989130 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:36:01.989141 | orchestrator | 2026-04-16 08:36:01.989152 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-04-16 08:36:01.989163 | orchestrator | Thursday 16 April 2026 08:35:29 +0000 (0:00:02.129) 0:49:36.026 ******** 2026-04-16 08:36:01.989203 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_14735', 'value': {'gid': 14735, 'name': 'testbed-node-5', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.15:6817/2289351959', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.15:6816', 'nonce': 2289351959}, {'type': 'v1', 'addr': '192.168.16.15:6817', 'nonce': 2289351959}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-04-16 08:36:01.989221 | orchestrator | 2026-04-16 08:36:01.989232 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-04-16 08:36:01.989243 | orchestrator | Thursday 16 April 2026 08:35:30 +0000 (0:00:01.181) 0:49:37.207 ******** 2026-04-16 08:36:01.989254 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-3)  2026-04-16 08:36:01.989265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-04-16 08:36:01.989276 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-5) 2026-04-16 08:36:01.989287 | orchestrator | 2026-04-16 08:36:01.989298 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-04-16 08:36:01.989309 | orchestrator | Thursday 16 April 2026 08:35:31 +0000 (0:00:01.501) 0:49:38.709 ******** 2026-04-16 08:36:01.989320 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-04-16 08:36:01.989331 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-4) 2026-04-16 08:36:01.989342 | orchestrator | 2026-04-16 08:36:01.989352 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-04-16 08:36:01.989363 | orchestrator | Thursday 16 April 2026 08:35:33 +0000 (0:00:01.518) 0:49:40.228 ******** 2026-04-16 08:36:01.989374 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:36:01.989385 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:36:01.989396 | orchestrator | 2026-04-16 08:36:01.989407 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-04-16 08:36:01.989418 | orchestrator | Thursday 16 April 2026 08:35:44 +0000 (0:00:10.911) 0:49:51.140 ******** 2026-04-16 08:36:01.989428 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:36:01.989439 | orchestrator | changed: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-04-16 08:36:01.989458 | orchestrator | 2026-04-16 08:36:01.989469 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-04-16 08:36:01.989480 | 
orchestrator | Thursday 16 April 2026 08:35:48 +0000 (0:00:03.909) 0:49:55.050 ******** 2026-04-16 08:36:01.989490 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:36:01.989501 | orchestrator | 2026-04-16 08:36:01.989512 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-04-16 08:36:01.989523 | orchestrator | Thursday 16 April 2026 08:35:50 +0000 (0:00:02.243) 0:49:57.294 ******** 2026-04-16 08:36:01.989534 | orchestrator | changed: [testbed-node-0] 2026-04-16 08:36:01.989545 | orchestrator | 2026-04-16 08:36:01.989556 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-04-16 08:36:01.989567 | orchestrator | 2026-04-16 08:36:01.989578 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:36:01.989589 | orchestrator | Thursday 16 April 2026 08:35:52 +0000 (0:00:01.522) 0:49:58.816 ******** 2026-04-16 08:36:01.989599 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-16 08:36:01.989610 | orchestrator | 2026-04-16 08:36:01.989640 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:36:01.989651 | orchestrator | Thursday 16 April 2026 08:35:53 +0000 (0:00:01.266) 0:50:00.082 ******** 2026-04-16 08:36:01.989662 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989673 | orchestrator | 2026-04-16 08:36:01.989684 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:36:01.989695 | orchestrator | Thursday 16 April 2026 08:35:54 +0000 (0:00:01.434) 0:50:01.516 ******** 2026-04-16 08:36:01.989706 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989717 | orchestrator | 2026-04-16 08:36:01.989727 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:36:01.989738 | orchestrator | 
Thursday 16 April 2026 08:35:55 +0000 (0:00:01.160) 0:50:02.677 ******** 2026-04-16 08:36:01.989749 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989760 | orchestrator | 2026-04-16 08:36:01.989771 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:36:01.989781 | orchestrator | Thursday 16 April 2026 08:35:57 +0000 (0:00:01.448) 0:50:04.126 ******** 2026-04-16 08:36:01.989792 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989803 | orchestrator | 2026-04-16 08:36:01.989814 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:36:01.989825 | orchestrator | Thursday 16 April 2026 08:35:58 +0000 (0:00:01.121) 0:50:05.247 ******** 2026-04-16 08:36:01.989835 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989846 | orchestrator | 2026-04-16 08:36:01.989857 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:36:01.989877 | orchestrator | Thursday 16 April 2026 08:35:59 +0000 (0:00:01.122) 0:50:06.370 ******** 2026-04-16 08:36:01.989889 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989900 | orchestrator | 2026-04-16 08:36:01.989910 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:36:01.989921 | orchestrator | Thursday 16 April 2026 08:36:00 +0000 (0:00:01.130) 0:50:07.500 ******** 2026-04-16 08:36:01.989932 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:01.989943 | orchestrator | 2026-04-16 08:36:01.989954 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:36:01.989965 | orchestrator | Thursday 16 April 2026 08:36:01 +0000 (0:00:01.103) 0:50:08.604 ******** 2026-04-16 08:36:01.989975 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:01.989986 | orchestrator | 2026-04-16 08:36:01.990010 | orchestrator | TASK 
[ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:36:26.273793 | orchestrator | Thursday 16 April 2026 08:36:02 +0000 (0:00:01.106) 0:50:09.710 ******** 2026-04-16 08:36:26.273930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:36:26.273953 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:36:26.273988 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:36:26.273998 | orchestrator | 2026-04-16 08:36:26.274008 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:36:26.274067 | orchestrator | Thursday 16 April 2026 08:36:04 +0000 (0:00:01.908) 0:50:11.619 ******** 2026-04-16 08:36:26.274078 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:26.274089 | orchestrator | 2026-04-16 08:36:26.274099 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:36:26.274107 | orchestrator | Thursday 16 April 2026 08:36:06 +0000 (0:00:01.245) 0:50:12.865 ******** 2026-04-16 08:36:26.274116 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:36:26.274125 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:36:26.274134 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:36:26.274143 | orchestrator | 2026-04-16 08:36:26.274152 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:36:26.274160 | orchestrator | Thursday 16 April 2026 08:36:09 +0000 (0:00:03.145) 0:50:16.011 ******** 2026-04-16 08:36:26.274170 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-16 08:36:26.274179 | orchestrator | 
skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-16 08:36:26.274188 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-16 08:36:26.274197 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274206 | orchestrator | 2026-04-16 08:36:26.274215 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:36:26.274223 | orchestrator | Thursday 16 April 2026 08:36:10 +0000 (0:00:01.711) 0:50:17.723 ******** 2026-04-16 08:36:26.274233 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274245 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274255 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274265 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274275 | orchestrator | 2026-04-16 08:36:26.274285 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:36:26.274296 | orchestrator | Thursday 16 April 2026 08:36:12 +0000 (0:00:01.620) 0:50:19.343 ******** 2026-04-16 08:36:26.274309 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274322 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274333 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:36:26.274350 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274361 | orchestrator | 2026-04-16 08:36:26.274371 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:36:26.274379 | orchestrator | Thursday 16 April 2026 08:36:13 +0000 (0:00:01.132) 0:50:20.476 ******** 2026-04-16 08:36:26.274430 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:36:06.949976', 'end': '2026-04-16 08:36:06.998107', 'delta': '0:00:00.048131', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:36:26.274443 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:36:07.509841', 'end': '2026-04-16 08:36:07.562422', 'delta': '0:00:00.052581', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:36:26.274452 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:36:08.073744', 'end': '2026-04-16 08:36:08.118785', 'delta': '0:00:00.045041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:36:26.274462 | orchestrator | 2026-04-16 08:36:26.274471 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:36:26.274479 | 
orchestrator | Thursday 16 April 2026 08:36:14 +0000 (0:00:01.211) 0:50:21.688 ******** 2026-04-16 08:36:26.274488 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:26.274497 | orchestrator | 2026-04-16 08:36:26.274505 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:36:26.274514 | orchestrator | Thursday 16 April 2026 08:36:16 +0000 (0:00:01.217) 0:50:22.905 ******** 2026-04-16 08:36:26.274523 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274531 | orchestrator | 2026-04-16 08:36:26.274540 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:36:26.274548 | orchestrator | Thursday 16 April 2026 08:36:17 +0000 (0:00:01.253) 0:50:24.159 ******** 2026-04-16 08:36:26.274557 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:26.274566 | orchestrator | 2026-04-16 08:36:26.274574 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:36:26.274583 | orchestrator | Thursday 16 April 2026 08:36:18 +0000 (0:00:01.141) 0:50:25.301 ******** 2026-04-16 08:36:26.274597 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:36:26.274605 | orchestrator | 2026-04-16 08:36:26.274614 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:36:26.274623 | orchestrator | Thursday 16 April 2026 08:36:20 +0000 (0:00:01.948) 0:50:27.249 ******** 2026-04-16 08:36:26.274632 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:26.274665 | orchestrator | 2026-04-16 08:36:26.274675 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:36:26.274684 | orchestrator | Thursday 16 April 2026 08:36:21 +0000 (0:00:01.122) 0:50:28.372 ******** 2026-04-16 08:36:26.274692 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274701 | orchestrator | 
2026-04-16 08:36:26.274710 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:36:26.274718 | orchestrator | Thursday 16 April 2026 08:36:22 +0000 (0:00:01.096) 0:50:29.468 ******** 2026-04-16 08:36:26.274727 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274735 | orchestrator | 2026-04-16 08:36:26.274744 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:36:26.274752 | orchestrator | Thursday 16 April 2026 08:36:23 +0000 (0:00:01.195) 0:50:30.664 ******** 2026-04-16 08:36:26.274762 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274770 | orchestrator | 2026-04-16 08:36:26.274779 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:36:26.274787 | orchestrator | Thursday 16 April 2026 08:36:25 +0000 (0:00:01.143) 0:50:31.808 ******** 2026-04-16 08:36:26.274796 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:26.274805 | orchestrator | 2026-04-16 08:36:26.274813 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:36:26.274826 | orchestrator | Thursday 16 April 2026 08:36:26 +0000 (0:00:01.122) 0:50:32.930 ******** 2026-04-16 08:36:26.274842 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:32.062123 | orchestrator | 2026-04-16 08:36:32.062235 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:36:32.062252 | orchestrator | Thursday 16 April 2026 08:36:27 +0000 (0:00:01.140) 0:50:34.071 ******** 2026-04-16 08:36:32.062264 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:32.062277 | orchestrator | 2026-04-16 08:36:32.062288 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:36:32.062299 | orchestrator | Thursday 16 April 2026 08:36:28 +0000 (0:00:01.110) 
0:50:35.181 ******** 2026-04-16 08:36:32.062310 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:32.062322 | orchestrator | 2026-04-16 08:36:32.062333 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:36:32.062344 | orchestrator | Thursday 16 April 2026 08:36:29 +0000 (0:00:01.162) 0:50:36.344 ******** 2026-04-16 08:36:32.062356 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:32.062367 | orchestrator | 2026-04-16 08:36:32.062378 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:36:32.062390 | orchestrator | Thursday 16 April 2026 08:36:30 +0000 (0:00:01.110) 0:50:37.454 ******** 2026-04-16 08:36:32.062401 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:36:32.062412 | orchestrator | 2026-04-16 08:36:32.062422 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:36:32.062433 | orchestrator | Thursday 16 April 2026 08:36:31 +0000 (0:00:01.153) 0:50:38.608 ******** 2026-04-16 08:36:32.062447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 
'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}})  2026-04-16 08:36:32.062504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:36:32.062517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}})  2026-04-16 08:36:32.062530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062576 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:36:32.062606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:32.062686 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}})  2026-04-16 08:36:32.062700 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}})  2026-04-16 08:36:32.062727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:33.405198 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:36:33.405322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:33.405348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:36:33.405369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:36:33.405388 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:36:33.405407 | orchestrator | 2026-04-16 08:36:33.405425 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:36:33.405443 | orchestrator | Thursday 16 April 2026 08:36:33 +0000 (0:00:01.398) 0:50:40.007 ******** 2026-04-16 08:36:33.405537 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.405565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.405593 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.405606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.405618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.405671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.519786 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {},
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.519941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.519961 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.519978 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.519993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.520046 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.520113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.520133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.520154 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:36:33.520173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:37:08.671545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:37:08.671840 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.671881 | orchestrator |
2026-04-16 08:37:08.671905 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:37:08.671927 | orchestrator | Thursday 16 April 2026 08:36:34 +0000 (0:00:01.374) 0:50:41.381 ********
2026-04-16 08:37:08.671949 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.671971 | orchestrator |
2026-04-16 08:37:08.671991 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:37:08.672011 | orchestrator | Thursday 16 April 2026 08:36:36 +0000 (0:00:01.462) 0:50:42.843 ********
2026-04-16 08:37:08.672031 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.672046 | orchestrator |
2026-04-16 08:37:08.672057 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:37:08.672068 | orchestrator | Thursday 16 April 2026 08:36:37 +0000 (0:00:01.103) 0:50:43.947 ********
2026-04-16 08:37:08.672081 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.672094 | orchestrator |
2026-04-16 08:37:08.672107 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:37:08.672120 | orchestrator | Thursday 16 April 2026 08:36:38 +0000 (0:00:01.471) 0:50:45.418 ********
2026-04-16 08:37:08.672132 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672145 | orchestrator |
2026-04-16 08:37:08.672159 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:37:08.672171 | orchestrator | Thursday 16 April 2026 08:36:39 +0000 (0:00:01.095) 0:50:46.514 ********
2026-04-16 08:37:08.672182 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672193 | orchestrator |
2026-04-16 08:37:08.672203 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:37:08.672214 | orchestrator | Thursday 16 April 2026 08:36:40 +0000 (0:00:01.199) 0:50:47.714 ********
2026-04-16 08:37:08.672225 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672235 | orchestrator |
2026-04-16 08:37:08.672246 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:37:08.672257 | orchestrator | Thursday 16 April 2026 08:36:42 +0000 (0:00:01.143) 0:50:48.858 ********
2026-04-16 08:37:08.672268 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:37:08.672279 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:37:08.672290 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:37:08.672300 | orchestrator |
2026-04-16 08:37:08.672311 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:37:08.672322 | orchestrator | Thursday 16 April 2026 08:36:44 +0000 (0:00:02.029) 0:50:50.888 ********
2026-04-16 08:37:08.672333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:37:08.672344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:37:08.672382 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:37:08.672393 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672405 | orchestrator |
2026-04-16 08:37:08.672415 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:37:08.672426 | orchestrator | Thursday 16 April 2026 08:36:45 +0000 (0:00:01.146) 0:50:52.034 ********
2026-04-16 08:37:08.672437 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-16 08:37:08.672448 | orchestrator |
2026-04-16 08:37:08.672464 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:37:08.672501 | orchestrator | Thursday 16 April 2026 08:36:46 +0000 (0:00:01.085) 0:50:53.120 ********
2026-04-16 08:37:08.672521 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672538 | orchestrator |
2026-04-16 08:37:08.672556 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:37:08.672575 | orchestrator | Thursday 16 April 2026 08:36:47 +0000 (0:00:01.146) 0:50:54.266 ********
2026-04-16 08:37:08.672594 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672613 | orchestrator |
2026-04-16 08:37:08.672626 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:37:08.672637 | orchestrator | Thursday 16 April 2026 08:36:48 +0000 (0:00:01.183) 0:50:55.450 ********
2026-04-16 08:37:08.672648 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672661 | orchestrator |
2026-04-16 08:37:08.672729 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:37:08.672740 | orchestrator | Thursday 16 April 2026 08:36:49 +0000 (0:00:01.146) 0:50:56.596 ********
2026-04-16 08:37:08.672751 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.672762 | orchestrator |
2026-04-16 08:37:08.672773 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:37:08.672784 | orchestrator | Thursday 16 April 2026 08:36:51 +0000 (0:00:01.234) 0:50:57.831 ********
2026-04-16 08:37:08.672795 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:37:08.672830 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:37:08.672842 | orchestrator | skipping: [testbed-node-5]
=> (item=testbed-node-5)
2026-04-16 08:37:08.672853 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672864 | orchestrator |
2026-04-16 08:37:08.672875 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:37:08.672885 | orchestrator | Thursday 16 April 2026 08:36:52 +0000 (0:00:01.345) 0:50:59.176 ********
2026-04-16 08:37:08.672896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:37:08.672907 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:37:08.672918 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:37:08.672929 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.672939 | orchestrator |
2026-04-16 08:37:08.672950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:37:08.672961 | orchestrator | Thursday 16 April 2026 08:36:53 +0000 (0:00:01.423) 0:51:00.599 ********
2026-04-16 08:37:08.672972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:37:08.672983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:37:08.672993 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:37:08.673004 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.673015 | orchestrator |
2026-04-16 08:37:08.673026 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:37:08.673036 | orchestrator | Thursday 16 April 2026 08:36:55 +0000 (0:00:01.392) 0:51:01.992 ********
2026-04-16 08:37:08.673047 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.673058 | orchestrator |
2026-04-16 08:37:08.673069 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:37:08.673092 | orchestrator | Thursday 16 April 2026 08:36:56 +0000 (0:00:01.125) 0:51:03.118 ********
2026-04-16 08:37:08.673103 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 08:37:08.673114 | orchestrator |
2026-04-16 08:37:08.673124 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:37:08.673135 | orchestrator | Thursday 16 April 2026 08:36:58 +0000 (0:00:01.683) 0:51:04.802 ********
2026-04-16 08:37:08.673146 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:37:08.673157 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:37:08.673174 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:37:08.673192 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:37:08.673211 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:37:08.673229 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:37:08.673247 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:37:08.673266 | orchestrator |
2026-04-16 08:37:08.673286 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:37:08.673305 | orchestrator | Thursday 16 April 2026 08:37:00 +0000 (0:00:02.070) 0:51:06.872 ********
2026-04-16 08:37:08.673323 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:37:08.673338 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:37:08.673349 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:37:08.673359 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:37:08.673370 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:37:08.673381 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:37:08.673391 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:37:08.673402 | orchestrator |
2026-04-16 08:37:08.673413 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-04-16 08:37:08.673423 | orchestrator | Thursday 16 April 2026 08:37:02 +0000 (0:00:02.536) 0:51:09.408 ********
2026-04-16 08:37:08.673434 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.673444 | orchestrator |
2026-04-16 08:37:08.673455 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:37:08.673473 | orchestrator | Thursday 16 April 2026 08:37:03 +0000 (0:00:01.163) 0:51:10.572 ********
2026-04-16 08:37:08.673484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-04-16 08:37:08.673495 | orchestrator |
2026-04-16 08:37:08.673506 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:37:08.673517 | orchestrator | Thursday 16 April 2026 08:37:04 +0000 (0:00:01.093) 0:51:11.665 ********
2026-04-16 08:37:08.673527 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-04-16 08:37:08.673538 | orchestrator |
2026-04-16 08:37:08.673549 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:37:08.673559 | orchestrator | Thursday 16 April 2026 08:37:06 +0000 (0:00:01.119) 0:51:12.785 ********
2026-04-16 08:37:08.673570 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:08.673581 | orchestrator |
2026-04-16 08:37:08.673592 | orchestrator
| TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:37:08.673602 | orchestrator | Thursday 16 April 2026 08:37:07 +0000 (0:00:01.097) 0:51:13.882 ********
2026-04-16 08:37:08.673613 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:08.673624 | orchestrator |
2026-04-16 08:37:08.673634 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:37:08.673790 | orchestrator | Thursday 16 April 2026 08:37:08 +0000 (0:00:01.533) 0:51:15.416 ********
2026-04-16 08:37:58.364893 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365050 | orchestrator |
2026-04-16 08:37:58.365070 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:37:58.365084 | orchestrator | Thursday 16 April 2026 08:37:10 +0000 (0:00:01.517) 0:51:16.934 ********
2026-04-16 08:37:58.365095 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365106 | orchestrator |
2026-04-16 08:37:58.365117 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:37:58.365128 | orchestrator | Thursday 16 April 2026 08:37:11 +0000 (0:00:01.579) 0:51:18.513 ********
2026-04-16 08:37:58.365140 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365152 | orchestrator |
2026-04-16 08:37:58.365163 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:37:58.365174 | orchestrator | Thursday 16 April 2026 08:37:12 +0000 (0:00:01.124) 0:51:19.637 ********
2026-04-16 08:37:58.365185 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365196 | orchestrator |
2026-04-16 08:37:58.365207 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:37:58.365218 | orchestrator | Thursday 16 April 2026 08:37:14 +0000 (0:00:01.161) 0:51:20.799 ********
2026-04-16 08:37:58.365229 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365240 | orchestrator |
2026-04-16 08:37:58.365251 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:37:58.365261 | orchestrator | Thursday 16 April 2026 08:37:15 +0000 (0:00:01.127) 0:51:21.926 ********
2026-04-16 08:37:58.365272 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365283 | orchestrator |
2026-04-16 08:37:58.365294 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:37:58.365305 | orchestrator | Thursday 16 April 2026 08:37:16 +0000 (0:00:01.506) 0:51:23.433 ********
2026-04-16 08:37:58.365316 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365327 | orchestrator |
2026-04-16 08:37:58.365338 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:37:58.365348 | orchestrator | Thursday 16 April 2026 08:37:18 +0000 (0:00:01.485) 0:51:24.918 ********
2026-04-16 08:37:58.365359 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365370 | orchestrator |
2026-04-16 08:37:58.365381 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:37:58.365392 | orchestrator | Thursday 16 April 2026 08:37:19 +0000 (0:00:01.118) 0:51:26.037 ********
2026-04-16 08:37:58.365403 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365416 | orchestrator |
2026-04-16 08:37:58.365429 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:37:58.365441 | orchestrator | Thursday 16 April 2026 08:37:20 +0000 (0:00:01.103) 0:51:27.141 ********
2026-04-16 08:37:58.365453 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365466 | orchestrator |
2026-04-16 08:37:58.365478 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:37:58.365490 | orchestrator | Thursday 16 April 2026 08:37:21 +0000 (0:00:01.111) 0:51:28.253 ********
2026-04-16 08:37:58.365503 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365515 | orchestrator |
2026-04-16 08:37:58.365528 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:37:58.365540 | orchestrator | Thursday 16 April 2026 08:37:22 +0000 (0:00:01.120) 0:51:29.373 ********
2026-04-16 08:37:58.365553 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365566 | orchestrator |
2026-04-16 08:37:58.365578 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:37:58.365590 | orchestrator | Thursday 16 April 2026 08:37:23 +0000 (0:00:01.132) 0:51:30.506 ********
2026-04-16 08:37:58.365601 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365612 | orchestrator |
2026-04-16 08:37:58.365623 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:37:58.365662 | orchestrator | Thursday 16 April 2026 08:37:24 +0000 (0:00:01.110) 0:51:31.617 ********
2026-04-16 08:37:58.365674 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365685 | orchestrator |
2026-04-16 08:37:58.365774 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:37:58.365791 | orchestrator | Thursday 16 April 2026 08:37:25 +0000 (0:00:01.118) 0:51:32.736 ********
2026-04-16 08:37:58.365803 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365814 | orchestrator |
2026-04-16 08:37:58.365826 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:37:58.365837 | orchestrator | Thursday 16 April 2026 08:37:27 +0000 (0:00:01.100) 0:51:33.836 ********
2026-04-16 08:37:58.365848 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365859 | orchestrator |
2026-04-16 08:37:58.365870 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:37:58.365897 | orchestrator | Thursday 16 April 2026 08:37:28 +0000 (0:00:01.174) 0:51:35.011 ********
2026-04-16 08:37:58.365909 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:37:58.365920 | orchestrator |
2026-04-16 08:37:58.365931 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:37:58.365942 | orchestrator | Thursday 16 April 2026 08:37:29 +0000 (0:00:01.201) 0:51:36.212 ********
2026-04-16 08:37:58.365953 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.365964 | orchestrator |
2026-04-16 08:37:58.365975 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:37:58.365987 | orchestrator | Thursday 16 April 2026 08:37:30 +0000 (0:00:01.127) 0:51:37.340 ********
2026-04-16 08:37:58.365998 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366009 | orchestrator |
2026-04-16 08:37:58.366081 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:37:58.366093 | orchestrator | Thursday 16 April 2026 08:37:31 +0000 (0:00:01.124) 0:51:38.464 ********
2026-04-16 08:37:58.366104 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366115 | orchestrator |
2026-04-16 08:37:58.366126 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:37:58.366137 | orchestrator | Thursday 16 April 2026 08:37:32 +0000 (0:00:01.159) 0:51:39.623 ********
2026-04-16 08:37:58.366148 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366159 | orchestrator |
2026-04-16 08:37:58.366169 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:37:58.366200 | orchestrator | Thursday 16 April 2026 08:37:33 +0000 (0:00:01.128) 0:51:40.752 ********
2026-04-16 08:37:58.366212 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366223 | orchestrator |
2026-04-16 08:37:58.366234 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:37:58.366245 | orchestrator | Thursday 16 April 2026 08:37:35 +0000 (0:00:01.105) 0:51:41.858 ********
2026-04-16 08:37:58.366256 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366267 | orchestrator |
2026-04-16 08:37:58.366277 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:37:58.366288 | orchestrator | Thursday 16 April 2026 08:37:36 +0000 (0:00:01.099) 0:51:42.958 ********
2026-04-16 08:37:58.366299 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366310 | orchestrator |
2026-04-16 08:37:58.366321 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:37:58.366332 | orchestrator | Thursday 16 April 2026 08:37:37 +0000 (0:00:01.113) 0:51:44.072 ********
2026-04-16 08:37:58.366343 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366354 | orchestrator |
2026-04-16 08:37:58.366365 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:37:58.366375 | orchestrator | Thursday 16 April 2026 08:37:38 +0000 (0:00:01.094) 0:51:45.166 ********
2026-04-16 08:37:58.366386 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366397 | orchestrator |
2026-04-16 08:37:58.366418 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:37:58.366430 | orchestrator | Thursday 16 April 2026 08:37:39 +0000 (0:00:01.080) 0:51:46.247 ********
2026-04-16 08:37:58.366441 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:37:58.366453 | orchestrator |
2026-04-16 08:37:58.366464 | orchestrator | TASK [ceph-common :
Include configure_memory_allocator.yml] ******************** 2026-04-16 08:37:58.366476 | orchestrator | Thursday 16 April 2026 08:37:40 +0000 (0:00:01.117) 0:51:47.364 ******** 2026-04-16 08:37:58.366487 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.366499 | orchestrator | 2026-04-16 08:37:58.366511 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-04-16 08:37:58.366522 | orchestrator | Thursday 16 April 2026 08:37:41 +0000 (0:00:01.111) 0:51:48.476 ******** 2026-04-16 08:37:58.366534 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.366545 | orchestrator | 2026-04-16 08:37:58.366557 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:37:58.366568 | orchestrator | Thursday 16 April 2026 08:37:42 +0000 (0:00:01.167) 0:51:49.644 ******** 2026-04-16 08:37:58.366580 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:37:58.366591 | orchestrator | 2026-04-16 08:37:58.366603 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:37:58.366614 | orchestrator | Thursday 16 April 2026 08:37:44 +0000 (0:00:01.935) 0:51:51.580 ******** 2026-04-16 08:37:58.366626 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:37:58.366637 | orchestrator | 2026-04-16 08:37:58.366649 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:37:58.366661 | orchestrator | Thursday 16 April 2026 08:37:47 +0000 (0:00:02.201) 0:51:53.781 ******** 2026-04-16 08:37:58.366672 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-16 08:37:58.366685 | orchestrator | 2026-04-16 08:37:58.366720 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:37:58.366739 | orchestrator | Thursday 16 April 2026 08:37:48 +0000 (0:00:01.093) 
0:51:54.875 ******** 2026-04-16 08:37:58.366757 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.366774 | orchestrator | 2026-04-16 08:37:58.366786 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:37:58.366796 | orchestrator | Thursday 16 April 2026 08:37:49 +0000 (0:00:01.122) 0:51:55.998 ******** 2026-04-16 08:37:58.366807 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.366818 | orchestrator | 2026-04-16 08:37:58.366829 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:37:58.366840 | orchestrator | Thursday 16 April 2026 08:37:50 +0000 (0:00:01.141) 0:51:57.139 ******** 2026-04-16 08:37:58.366850 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:37:58.366861 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:37:58.366871 | orchestrator | 2026-04-16 08:37:58.366883 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:37:58.366893 | orchestrator | Thursday 16 April 2026 08:37:52 +0000 (0:00:01.776) 0:51:58.916 ******** 2026-04-16 08:37:58.366904 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:37:58.366915 | orchestrator | 2026-04-16 08:37:58.366932 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:37:58.366943 | orchestrator | Thursday 16 April 2026 08:37:53 +0000 (0:00:01.461) 0:52:00.378 ******** 2026-04-16 08:37:58.366954 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.366965 | orchestrator | 2026-04-16 08:37:58.366976 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:37:58.366987 | orchestrator | Thursday 16 April 2026 08:37:54 +0000 (0:00:01.109) 0:52:01.487 ******** 2026-04-16 08:37:58.366998 | 
orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.367008 | orchestrator | 2026-04-16 08:37:58.367020 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:37:58.367037 | orchestrator | Thursday 16 April 2026 08:37:55 +0000 (0:00:01.227) 0:52:02.715 ******** 2026-04-16 08:37:58.367048 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:37:58.367059 | orchestrator | 2026-04-16 08:37:58.367070 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:37:58.367080 | orchestrator | Thursday 16 April 2026 08:37:57 +0000 (0:00:01.158) 0:52:03.874 ******** 2026-04-16 08:37:58.367091 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-04-16 08:37:58.367102 | orchestrator | 2026-04-16 08:37:58.367113 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:37:58.367131 | orchestrator | Thursday 16 April 2026 08:37:58 +0000 (0:00:01.235) 0:52:05.109 ******** 2026-04-16 08:38:43.294186 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:38:43.294288 | orchestrator | 2026-04-16 08:38:43.294301 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:38:43.294312 | orchestrator | Thursday 16 April 2026 08:38:00 +0000 (0:00:01.704) 0:52:06.813 ******** 2026-04-16 08:38:43.294322 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:38:43.294331 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:38:43.294340 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:38:43.294349 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294358 | orchestrator | 2026-04-16 08:38:43.294367 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-04-16 08:38:43.294376 | orchestrator | Thursday 16 April 2026 08:38:01 +0000 (0:00:01.113) 0:52:07.927 ******** 2026-04-16 08:38:43.294385 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294393 | orchestrator | 2026-04-16 08:38:43.294402 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 08:38:43.294410 | orchestrator | Thursday 16 April 2026 08:38:02 +0000 (0:00:01.147) 0:52:09.075 ******** 2026-04-16 08:38:43.294419 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294428 | orchestrator | 2026-04-16 08:38:43.294436 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:38:43.294445 | orchestrator | Thursday 16 April 2026 08:38:03 +0000 (0:00:01.123) 0:52:10.199 ******** 2026-04-16 08:38:43.294454 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294462 | orchestrator | 2026-04-16 08:38:43.294471 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:38:43.294479 | orchestrator | Thursday 16 April 2026 08:38:04 +0000 (0:00:01.102) 0:52:11.302 ******** 2026-04-16 08:38:43.294488 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294497 | orchestrator | 2026-04-16 08:38:43.294505 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:38:43.294514 | orchestrator | Thursday 16 April 2026 08:38:05 +0000 (0:00:01.143) 0:52:12.446 ******** 2026-04-16 08:38:43.294522 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294531 | orchestrator | 2026-04-16 08:38:43.294540 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:38:43.294548 | orchestrator | Thursday 16 April 2026 08:38:06 +0000 (0:00:01.132) 0:52:13.578 ******** 2026-04-16 08:38:43.294557 | orchestrator | 
ok: [testbed-node-5] 2026-04-16 08:38:43.294566 | orchestrator | 2026-04-16 08:38:43.294574 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:38:43.294583 | orchestrator | Thursday 16 April 2026 08:38:09 +0000 (0:00:02.521) 0:52:16.100 ******** 2026-04-16 08:38:43.294591 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:38:43.294600 | orchestrator | 2026-04-16 08:38:43.294609 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:38:43.294617 | orchestrator | Thursday 16 April 2026 08:38:10 +0000 (0:00:01.119) 0:52:17.220 ******** 2026-04-16 08:38:43.294626 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-04-16 08:38:43.294658 | orchestrator | 2026-04-16 08:38:43.294667 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:38:43.294676 | orchestrator | Thursday 16 April 2026 08:38:11 +0000 (0:00:01.133) 0:52:18.353 ******** 2026-04-16 08:38:43.294685 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294693 | orchestrator | 2026-04-16 08:38:43.294702 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:38:43.294711 | orchestrator | Thursday 16 April 2026 08:38:12 +0000 (0:00:01.158) 0:52:19.512 ******** 2026-04-16 08:38:43.294719 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294787 | orchestrator | 2026-04-16 08:38:43.294797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:38:43.294807 | orchestrator | Thursday 16 April 2026 08:38:13 +0000 (0:00:01.147) 0:52:20.659 ******** 2026-04-16 08:38:43.294817 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294828 | orchestrator | 2026-04-16 08:38:43.294838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-04-16 08:38:43.294847 | orchestrator | Thursday 16 April 2026 08:38:15 +0000 (0:00:01.114) 0:52:21.774 ******** 2026-04-16 08:38:43.294857 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294867 | orchestrator | 2026-04-16 08:38:43.294877 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 08:38:43.294899 | orchestrator | Thursday 16 April 2026 08:38:16 +0000 (0:00:01.118) 0:52:22.893 ******** 2026-04-16 08:38:43.294909 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294919 | orchestrator | 2026-04-16 08:38:43.294929 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:38:43.294939 | orchestrator | Thursday 16 April 2026 08:38:17 +0000 (0:00:01.134) 0:52:24.027 ******** 2026-04-16 08:38:43.294948 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.294958 | orchestrator | 2026-04-16 08:38:43.294968 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:38:43.294979 | orchestrator | Thursday 16 April 2026 08:38:18 +0000 (0:00:01.136) 0:52:25.163 ******** 2026-04-16 08:38:43.294988 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295009 | orchestrator | 2026-04-16 08:38:43.295020 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:38:43.295030 | orchestrator | Thursday 16 April 2026 08:38:19 +0000 (0:00:01.107) 0:52:26.271 ******** 2026-04-16 08:38:43.295040 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295050 | orchestrator | 2026-04-16 08:38:43.295059 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:38:43.295069 | orchestrator | Thursday 16 April 2026 08:38:20 +0000 (0:00:01.117) 0:52:27.389 ******** 2026-04-16 08:38:43.295080 | orchestrator | ok: [testbed-node-5] 
2026-04-16 08:38:43.295090 | orchestrator | 2026-04-16 08:38:43.295098 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:38:43.295124 | orchestrator | Thursday 16 April 2026 08:38:21 +0000 (0:00:01.125) 0:52:28.514 ******** 2026-04-16 08:38:43.295133 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-04-16 08:38:43.295143 | orchestrator | 2026-04-16 08:38:43.295152 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:38:43.295161 | orchestrator | Thursday 16 April 2026 08:38:22 +0000 (0:00:01.117) 0:52:29.632 ******** 2026-04-16 08:38:43.295170 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-04-16 08:38:43.295179 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-16 08:38:43.295188 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-16 08:38:43.295197 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-16 08:38:43.295205 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-16 08:38:43.295214 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-16 08:38:43.295222 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-16 08:38:43.295238 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:38:43.295247 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:38:43.295256 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:38:43.295265 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:38:43.295274 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:38:43.295282 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:38:43.295291 | 
orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:38:43.295300 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-16 08:38:43.295308 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-16 08:38:43.295317 | orchestrator | 2026-04-16 08:38:43.295326 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:38:43.295334 | orchestrator | Thursday 16 April 2026 08:38:29 +0000 (0:00:06.640) 0:52:36.273 ******** 2026-04-16 08:38:43.295343 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-16 08:38:43.295352 | orchestrator | 2026-04-16 08:38:43.295361 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:38:43.295369 | orchestrator | Thursday 16 April 2026 08:38:30 +0000 (0:00:01.113) 0:52:37.387 ******** 2026-04-16 08:38:43.295378 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 08:38:43.295388 | orchestrator | 2026-04-16 08:38:43.295397 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:38:43.295405 | orchestrator | Thursday 16 April 2026 08:38:32 +0000 (0:00:01.496) 0:52:38.883 ******** 2026-04-16 08:38:43.295414 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 08:38:43.295423 | orchestrator | 2026-04-16 08:38:43.295431 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:38:43.295440 | orchestrator | Thursday 16 April 2026 08:38:34 +0000 (0:00:01.989) 0:52:40.873 ******** 2026-04-16 08:38:43.295448 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295457 | orchestrator | 
2026-04-16 08:38:43.295466 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:38:43.295474 | orchestrator | Thursday 16 April 2026 08:38:35 +0000 (0:00:01.133) 0:52:42.006 ******** 2026-04-16 08:38:43.295483 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295492 | orchestrator | 2026-04-16 08:38:43.295500 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:38:43.295509 | orchestrator | Thursday 16 April 2026 08:38:36 +0000 (0:00:01.109) 0:52:43.115 ******** 2026-04-16 08:38:43.295517 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295526 | orchestrator | 2026-04-16 08:38:43.295535 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:38:43.295543 | orchestrator | Thursday 16 April 2026 08:38:37 +0000 (0:00:01.133) 0:52:44.249 ******** 2026-04-16 08:38:43.295552 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295561 | orchestrator | 2026-04-16 08:38:43.295569 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:38:43.295582 | orchestrator | Thursday 16 April 2026 08:38:38 +0000 (0:00:01.138) 0:52:45.387 ******** 2026-04-16 08:38:43.295591 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295600 | orchestrator | 2026-04-16 08:38:43.295609 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:38:43.295617 | orchestrator | Thursday 16 April 2026 08:38:39 +0000 (0:00:01.105) 0:52:46.492 ******** 2026-04-16 08:38:43.295626 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295635 | orchestrator | 2026-04-16 08:38:43.295644 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:38:43.295658 | 
orchestrator | Thursday 16 April 2026 08:38:40 +0000 (0:00:01.134) 0:52:47.627 ******** 2026-04-16 08:38:43.295667 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295675 | orchestrator | 2026-04-16 08:38:43.295684 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:38:43.295693 | orchestrator | Thursday 16 April 2026 08:38:42 +0000 (0:00:01.149) 0:52:48.777 ******** 2026-04-16 08:38:43.295701 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295710 | orchestrator | 2026-04-16 08:38:43.295718 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:38:43.295747 | orchestrator | Thursday 16 April 2026 08:38:43 +0000 (0:00:01.122) 0:52:49.899 ******** 2026-04-16 08:38:43.295756 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:38:43.295765 | orchestrator | 2026-04-16 08:38:43.295780 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:39:39.246640 | orchestrator | Thursday 16 April 2026 08:38:44 +0000 (0:00:01.112) 0:52:51.012 ******** 2026-04-16 08:39:39.246747 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.246825 | orchestrator | 2026-04-16 08:39:39.246837 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:39:39.246847 | orchestrator | Thursday 16 April 2026 08:38:45 +0000 (0:00:01.134) 0:52:52.147 ******** 2026-04-16 08:39:39.246856 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.246865 | orchestrator | 2026-04-16 08:39:39.246874 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:39:39.246883 | orchestrator | Thursday 16 April 2026 08:38:46 +0000 (0:00:01.119) 0:52:53.266 ******** 2026-04-16 08:39:39.246892 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] 2026-04-16 08:39:39.246901 | orchestrator | 2026-04-16 08:39:39.246910 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:39:39.246919 | orchestrator | Thursday 16 April 2026 08:38:51 +0000 (0:00:04.779) 0:52:58.045 ******** 2026-04-16 08:39:39.246929 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 08:39:39.246939 | orchestrator | 2026-04-16 08:39:39.246948 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:39:39.246956 | orchestrator | Thursday 16 April 2026 08:38:52 +0000 (0:00:01.146) 0:52:59.192 ******** 2026-04-16 08:39:39.246968 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-04-16 08:39:39.246982 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-04-16 08:39:39.246992 | orchestrator | 2026-04-16 08:39:39.247001 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:39:39.247010 | orchestrator | Thursday 16 April 2026 08:38:57 +0000 (0:00:04.988) 0:53:04.180 ******** 2026-04-16 08:39:39.247019 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247027 | orchestrator | 2026-04-16 08:39:39.247036 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-04-16 08:39:39.247045 | orchestrator | Thursday 16 April 2026 08:38:58 +0000 (0:00:01.170) 0:53:05.350 ******** 2026-04-16 08:39:39.247054 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247063 | orchestrator | 2026-04-16 08:39:39.247072 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:39:39.247104 | orchestrator | Thursday 16 April 2026 08:38:59 +0000 (0:00:01.184) 0:53:06.534 ******** 2026-04-16 08:39:39.247113 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247122 | orchestrator | 2026-04-16 08:39:39.247131 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:39:39.247140 | orchestrator | Thursday 16 April 2026 08:39:00 +0000 (0:00:01.209) 0:53:07.744 ******** 2026-04-16 08:39:39.247148 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247157 | orchestrator | 2026-04-16 08:39:39.247166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:39:39.247174 | orchestrator | Thursday 16 April 2026 08:39:02 +0000 (0:00:01.130) 0:53:08.875 ******** 2026-04-16 08:39:39.247183 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247191 | orchestrator | 2026-04-16 08:39:39.247200 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:39:39.247208 | orchestrator | Thursday 16 April 2026 08:39:03 +0000 (0:00:01.122) 0:53:09.998 ******** 2026-04-16 08:39:39.247217 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:39:39.247226 | orchestrator | 2026-04-16 08:39:39.247235 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:39:39.247256 | orchestrator | Thursday 16 April 2026 08:39:04 +0000 (0:00:01.209) 0:53:11.207 
******** 2026-04-16 08:39:39.247265 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-16 08:39:39.247274 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-16 08:39:39.247283 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-16 08:39:39.247292 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247301 | orchestrator | 2026-04-16 08:39:39.247309 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:39:39.247318 | orchestrator | Thursday 16 April 2026 08:39:05 +0000 (0:00:01.387) 0:53:12.595 ******** 2026-04-16 08:39:39.247327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-16 08:39:39.247335 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-16 08:39:39.247344 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-16 08:39:39.247353 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247361 | orchestrator | 2026-04-16 08:39:39.247370 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:39:39.247379 | orchestrator | Thursday 16 April 2026 08:39:07 +0000 (0:00:01.367) 0:53:13.962 ******** 2026-04-16 08:39:39.247387 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-04-16 08:39:39.247396 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-04-16 08:39:39.247404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-04-16 08:39:39.247427 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247436 | orchestrator | 2026-04-16 08:39:39.247445 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:39:39.247453 | orchestrator | Thursday 16 April 2026 08:39:08 +0000 (0:00:01.694) 0:53:15.657 ******** 2026-04-16 08:39:39.247462 | orchestrator | 
ok: [testbed-node-5] 2026-04-16 08:39:39.247471 | orchestrator | 2026-04-16 08:39:39.247479 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:39:39.247488 | orchestrator | Thursday 16 April 2026 08:39:10 +0000 (0:00:01.135) 0:53:16.792 ******** 2026-04-16 08:39:39.247497 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-04-16 08:39:39.247505 | orchestrator | 2026-04-16 08:39:39.247514 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:39:39.247522 | orchestrator | Thursday 16 April 2026 08:39:11 +0000 (0:00:01.787) 0:53:18.579 ******** 2026-04-16 08:39:39.247531 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:39:39.247540 | orchestrator | 2026-04-16 08:39:39.247548 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-16 08:39:39.247564 | orchestrator | Thursday 16 April 2026 08:39:13 +0000 (0:00:01.741) 0:53:20.321 ******** 2026-04-16 08:39:39.247573 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:39:39.247582 | orchestrator | 2026-04-16 08:39:39.247590 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-16 08:39:39.247599 | orchestrator | Thursday 16 April 2026 08:39:14 +0000 (0:00:01.098) 0:53:21.419 ******** 2026-04-16 08:39:39.247608 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5 2026-04-16 08:39:39.247616 | orchestrator | 2026-04-16 08:39:39.247625 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-16 08:39:39.247633 | orchestrator | Thursday 16 April 2026 08:39:16 +0000 (0:00:01.479) 0:53:22.899 ******** 2026-04-16 08:39:39.247642 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 08:39:39.247650 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 
2026-04-16 08:39:39.247659 | orchestrator |
2026-04-16 08:39:39.247667 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-04-16 08:39:39.247676 | orchestrator | Thursday 16 April 2026 08:39:18 +0000 (0:00:01.863) 0:53:24.763 ********
2026-04-16 08:39:39.247684 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:39:39.247693 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:39:39.247702 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-16 08:39:39.247710 | orchestrator |
2026-04-16 08:39:39.247719 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:39:39.247727 | orchestrator | Thursday 16 April 2026 08:39:21 +0000 (0:00:03.274) 0:53:28.038 ********
2026-04-16 08:39:39.247736 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-16 08:39:39.247745 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:39:39.247774 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.247788 | orchestrator |
2026-04-16 08:39:39.247803 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-04-16 08:39:39.247817 | orchestrator | Thursday 16 April 2026 08:39:23 +0000 (0:00:01.989) 0:53:30.028 ********
2026-04-16 08:39:39.247833 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.247847 | orchestrator |
2026-04-16 08:39:39.247862 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-04-16 08:39:39.247875 | orchestrator | Thursday 16 April 2026 08:39:24 +0000 (0:00:01.478) 0:53:31.506 ********
2026-04-16 08:39:39.247886 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:39:39.247897 | orchestrator |
2026-04-16 08:39:39.247908 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-04-16 08:39:39.247918 | orchestrator | Thursday 16 April 2026 08:39:25 +0000 (0:00:01.143) 0:53:32.650 ********
2026-04-16 08:39:39.247929 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5
2026-04-16 08:39:39.247940 | orchestrator |
2026-04-16 08:39:39.247951 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-04-16 08:39:39.247961 | orchestrator | Thursday 16 April 2026 08:39:27 +0000 (0:00:01.589) 0:53:34.239 ********
2026-04-16 08:39:39.247972 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5
2026-04-16 08:39:39.247982 | orchestrator |
2026-04-16 08:39:39.247993 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-04-16 08:39:39.248010 | orchestrator | Thursday 16 April 2026 08:39:28 +0000 (0:00:01.489) 0:53:35.729 ********
2026-04-16 08:39:39.248021 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.248032 | orchestrator |
2026-04-16 08:39:39.248043 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-04-16 08:39:39.248053 | orchestrator | Thursday 16 April 2026 08:39:30 +0000 (0:00:02.025) 0:53:37.755 ********
2026-04-16 08:39:39.248064 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.248075 | orchestrator |
2026-04-16 08:39:39.248085 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-04-16 08:39:39.248104 | orchestrator | Thursday 16 April 2026 08:39:32 +0000 (0:00:01.980) 0:53:39.735 ********
2026-04-16 08:39:39.248115 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.248125 | orchestrator |
2026-04-16 08:39:39.248136 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-04-16 08:39:39.248147 | orchestrator | Thursday 16 April 2026 08:39:35 +0000 (0:00:02.244) 0:53:41.979 ********
2026-04-16 08:39:39.248158 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.248168 | orchestrator |
2026-04-16 08:39:39.248179 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-04-16 08:39:39.248190 | orchestrator | Thursday 16 April 2026 08:39:37 +0000 (0:00:02.329) 0:53:44.309 ********
2026-04-16 08:39:39.248201 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:39:39.248211 | orchestrator |
2026-04-16 08:39:39.248222 | orchestrator | TASK [Restart ceph mds] ********************************************************
2026-04-16 08:39:39.248233 | orchestrator | Thursday 16 April 2026 08:39:39 +0000 (0:00:01.110) 0:53:45.950 ********
2026-04-16 08:39:39.248253 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:40:09.844750 | orchestrator |
2026-04-16 08:40:09.844901 | orchestrator | TASK [Restart active mds] ******************************************************
2026-04-16 08:40:09.844919 | orchestrator | Thursday 16 April 2026 08:39:40 +0000 (0:00:01.110) 0:53:47.061 ********
2026-04-16 08:40:09.844931 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:40:09.844944 | orchestrator |
2026-04-16 08:40:09.844956 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] **************************************
2026-04-16 08:40:09.844967 | orchestrator |
2026-04-16 08:40:09.844978 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-04-16 08:40:09.844989 | orchestrator | Thursday 16 April 2026 08:39:47 +0000 (0:00:06.845) 0:53:53.906 ********
2026-04-16 08:40:09.845001 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4
2026-04-16 08:40:09.845013 | orchestrator |
2026-04-16 08:40:09.845024 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-04-16 08:40:09.845035 | orchestrator | Thursday 16 April 2026 08:39:48 +0000 (0:00:01.451) 0:53:55.358 ********
2026-04-16 08:40:09.845047 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845058 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845069 | orchestrator |
2026-04-16 08:40:09.845081 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-04-16 08:40:09.845092 | orchestrator | Thursday 16 April 2026 08:39:50 +0000 (0:00:01.569) 0:53:56.927 ********
2026-04-16 08:40:09.845103 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845114 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845125 | orchestrator |
2026-04-16 08:40:09.845136 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:40:09.845148 | orchestrator | Thursday 16 April 2026 08:39:51 +0000 (0:00:01.202) 0:53:58.130 ********
2026-04-16 08:40:09.845159 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845170 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845181 | orchestrator |
2026-04-16 08:40:09.845192 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:40:09.845204 | orchestrator | Thursday 16 April 2026 08:39:52 +0000 (0:00:01.479) 0:53:59.610 ********
2026-04-16 08:40:09.845215 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845226 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845242 | orchestrator |
2026-04-16 08:40:09.845262 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-04-16 08:40:09.845282 | orchestrator | Thursday 16 April 2026 08:39:54 +0000 (0:00:01.205) 0:54:00.815 ********
2026-04-16 08:40:09.845300 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845319 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845338 | orchestrator |
2026-04-16 08:40:09.845356 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-04-16 08:40:09.845374 | orchestrator | Thursday 16 April 2026 08:39:55 +0000 (0:00:01.174) 0:54:01.990 ********
2026-04-16 08:40:09.845392 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845446 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845468 | orchestrator |
2026-04-16 08:40:09.845489 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-04-16 08:40:09.845508 | orchestrator | Thursday 16 April 2026 08:39:56 +0000 (0:00:01.235) 0:54:03.226 ********
2026-04-16 08:40:09.845519 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:09.845531 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:09.845542 | orchestrator |
2026-04-16 08:40:09.845554 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-04-16 08:40:09.845565 | orchestrator | Thursday 16 April 2026 08:39:57 +0000 (0:00:01.209) 0:54:04.435 ********
2026-04-16 08:40:09.845576 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845587 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845598 | orchestrator |
2026-04-16 08:40:09.845608 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-04-16 08:40:09.845619 | orchestrator | Thursday 16 April 2026 08:39:58 +0000 (0:00:01.250) 0:54:05.686 ********
2026-04-16 08:40:09.845630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:40:09.845641 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:40:09.845652 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:40:09.845663 | orchestrator |
2026-04-16 08:40:09.845673 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-04-16 08:40:09.845684 | orchestrator | Thursday 16 April 2026 08:40:00 +0000 (0:00:01.798) 0:54:07.484 ********
2026-04-16 08:40:09.845695 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:09.845721 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:09.845732 | orchestrator |
2026-04-16 08:40:09.845743 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-04-16 08:40:09.845754 | orchestrator | Thursday 16 April 2026 08:40:02 +0000 (0:00:01.368) 0:54:08.853 ********
2026-04-16 08:40:09.845765 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:40:09.845812 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:40:09.845823 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:40:09.845834 | orchestrator |
2026-04-16 08:40:09.845845 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-04-16 08:40:09.845855 | orchestrator | Thursday 16 April 2026 08:40:05 +0000 (0:00:03.134) 0:54:11.988 ********
2026-04-16 08:40:09.845866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:40:09.845878 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:40:09.845888 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:40:09.845899 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:09.845910 | orchestrator |
2026-04-16 08:40:09.845921 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-04-16 08:40:09.845932 | orchestrator | Thursday 16 April 2026 08:40:06 +0000 (0:00:01.426) 0:54:13.415 ********
2026-04-16 08:40:09.845964 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.845979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.845991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846012 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:09.846106 | orchestrator |
2026-04-16 08:40:09.846117 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-04-16 08:40:09.846129 | orchestrator | Thursday 16 April 2026 08:40:08 +0000 (0:00:01.923) 0:54:15.339 ********
2026-04-16 08:40:09.846142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846190 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:09.846201 | orchestrator |
2026-04-16 08:40:09.846212 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-04-16 08:40:09.846223 | orchestrator | Thursday 16 April 2026 08:40:09 +0000 (0:00:01.127) 0:54:16.466 ********
2026-04-16 08:40:09.846243 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:40:02.638795', 'end': '2026-04-16 08:40:02.687193', 'delta': '0:00:00.048398', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846258 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:40:03.484371', 'end': '2026-04-16 08:40:03.537719', 'delta': '0:00:00.053348', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-04-16 08:40:09.846280 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:40:04.092940', 'end': '2026-04-16 08:40:04.132637', 'delta': '0:00:00.039697', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-04-16 08:40:29.546504 | orchestrator |
2026-04-16 08:40:29.546606 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-04-16 08:40:29.546622 | orchestrator | Thursday 16 April 2026 08:40:10 +0000 (0:00:01.208) 0:54:17.676 ********
2026-04-16 08:40:29.546631 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.546641 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.546649 | orchestrator |
2026-04-16 08:40:29.546658 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-04-16 08:40:29.546666 | orchestrator | Thursday 16 April 2026 08:40:12 +0000 (0:00:01.224) 0:54:19.062 ********
2026-04-16 08:40:29.546675 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.546684 | orchestrator |
2026-04-16 08:40:29.546692 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-04-16 08:40:29.546700 | orchestrator | Thursday 16 April 2026 08:40:13 +0000 (0:00:01.224) 0:54:20.287 ********
2026-04-16 08:40:29.546708 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.546716 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.546724 | orchestrator |
2026-04-16 08:40:29.546732 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-04-16 08:40:29.546740 | orchestrator | Thursday 16 April 2026 08:40:14 +0000 (0:00:01.245) 0:54:21.532 ********
2026-04-16 08:40:29.546748 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:40:29.546757 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-04-16 08:40:29.546764 | orchestrator |
2026-04-16 08:40:29.546772 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:40:29.546832 | orchestrator | Thursday 16 April 2026 08:40:16 +0000 (0:00:02.093) 0:54:23.626 ********
2026-04-16 08:40:29.546848 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.546860 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.546875 | orchestrator |
2026-04-16 08:40:29.546883 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-04-16 08:40:29.546892 | orchestrator | Thursday 16 April 2026 08:40:18 +0000 (0:00:01.203) 0:54:24.830 ********
2026-04-16 08:40:29.546900 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.546907 | orchestrator |
2026-04-16 08:40:29.546915 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-04-16 08:40:29.546923 | orchestrator | Thursday 16 April 2026 08:40:19 +0000 (0:00:01.094) 0:54:25.925 ********
2026-04-16 08:40:29.546931 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.546939 | orchestrator |
2026-04-16 08:40:29.546947 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-04-16 08:40:29.546955 | orchestrator | Thursday 16 April 2026 08:40:20 +0000 (0:00:01.185) 0:54:27.110 ********
2026-04-16 08:40:29.546963 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.546971 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:29.546979 | orchestrator |
2026-04-16 08:40:29.546987 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-04-16 08:40:29.546995 | orchestrator | Thursday 16 April 2026 08:40:21 +0000 (0:00:01.213) 0:54:28.323 ********
2026-04-16 08:40:29.547002 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.547010 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:29.547018 | orchestrator |
2026-04-16 08:40:29.547026 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-04-16 08:40:29.547034 | orchestrator | Thursday 16 April 2026 08:40:22 +0000 (0:00:01.224) 0:54:29.548 ********
2026-04-16 08:40:29.547042 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.547050 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.547059 | orchestrator |
2026-04-16 08:40:29.547069 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-04-16 08:40:29.547078 | orchestrator | Thursday 16 April 2026 08:40:24 +0000 (0:00:01.245) 0:54:30.793 ********
2026-04-16 08:40:29.547111 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.547121 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:29.547130 | orchestrator |
2026-04-16 08:40:29.547151 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-04-16 08:40:29.547161 | orchestrator | Thursday 16 April 2026 08:40:25 +0000 (0:00:01.225) 0:54:32.019 ********
2026-04-16 08:40:29.547170 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.547180 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.547189 | orchestrator |
2026-04-16 08:40:29.547197 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-04-16 08:40:29.547206 | orchestrator | Thursday 16 April 2026 08:40:26 +0000 (0:00:01.279) 0:54:33.298 ********
2026-04-16 08:40:29.547216 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.547230 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:29.547243 | orchestrator |
2026-04-16 08:40:29.547256 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-04-16 08:40:29.547270 | orchestrator | Thursday 16 April 2026 08:40:27 +0000 (0:00:01.210) 0:54:34.509 ********
2026-04-16 08:40:29.547284 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:29.547296 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:29.547310 | orchestrator |
2026-04-16 08:40:29.547324 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-04-16 08:40:29.547339 | orchestrator | Thursday 16 April 2026 08:40:29 +0000 (0:00:01.310) 0:54:35.820 ********
2026-04-16 08:40:29.547355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.547392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}})
2026-04-16 08:40:29.547410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 08:40:29.547425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}})
2026-04-16 08:40:29.547450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.547470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.547483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-04-16 08:40:29.547499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.547522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}})
2026-04-16 08:40:29.657716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}})
2026-04-16 08:40:29.657753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 08:40:29.657829 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657863 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:29.657881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.657898 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}})
2026-04-16 08:40:29.657910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 08:40:29.657929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}})
2026-04-16 08:40:29.781483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.781587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:40:29.781630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels':
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:40:29.781645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781695 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}})  2026-04-16 08:40:29.781727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}})  2026-04-16 08:40:29.781741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:40:29.781836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:40:29.781868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:40:31.130180 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:40:31.130305 | orchestrator | 2026-04-16 08:40:31.130327 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:40:31.130342 | orchestrator | Thursday 16 April 2026 08:40:30 +0000 (0:00:01.847) 0:54:37.667 ******** 2026-04-16 08:40:31.130359 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130376 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130498 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.130610 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187110 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': 
['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187276 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 
'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187284 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187297 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.187309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-04-16 08:40:31.282704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.282879 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.282940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.282961 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.283009 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:40:31.283031 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:40:31.283071 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:31.283098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:31.283118 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:31.283140 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:31.283188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:58.586831 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:58.586956 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:58.587001 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:40:58.587016 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587030 | orchestrator |
2026-04-16 08:40:58.587043 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:40:58.587055 | orchestrator | Thursday 16 April 2026 08:40:32 +0000 (0:00:01.490) 0:54:39.158 ********
2026-04-16 08:40:58.587067 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:58.587079 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:58.587089 | orchestrator |
2026-04-16 08:40:58.587101 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:40:58.587112 | orchestrator | Thursday 16 April 2026 08:40:34 +0000 (0:00:01.624) 0:54:40.782 ********
2026-04-16 08:40:58.587123 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:58.587134 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:58.587144 | orchestrator |
2026-04-16 08:40:58.587155 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:40:58.587166 | orchestrator | Thursday 16 April 2026 08:40:35 +0000 (0:00:01.215) 0:54:41.997 ********
2026-04-16 08:40:58.587177 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:58.587188 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:58.587199 | orchestrator |
2026-04-16 08:40:58.587209 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:40:58.587221 | orchestrator | Thursday 16 April 2026 08:40:36 +0000 (0:00:01.601) 0:54:43.599 ********
2026-04-16 08:40:58.587232 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587243 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587254 | orchestrator |
2026-04-16 08:40:58.587265 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:40:58.587276 | orchestrator | Thursday 16 April 2026 08:40:38 +0000 (0:00:01.215) 0:54:44.815 ********
2026-04-16 08:40:58.587287 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587297 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587308 | orchestrator |
2026-04-16 08:40:58.587320 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:40:58.587333 | orchestrator | Thursday 16 April 2026 08:40:39 +0000 (0:00:01.681) 0:54:46.496 ********
2026-04-16 08:40:58.587346 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587358 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587372 | orchestrator |
2026-04-16 08:40:58.587385 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:40:58.587398 | orchestrator | Thursday 16 April 2026 08:40:40 +0000 (0:00:01.231) 0:54:47.729 ********
2026-04-16 08:40:58.587410 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:40:58.587423 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 08:40:58.587435 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:40:58.587447 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 08:40:58.587459 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:40:58.587471 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 08:40:58.587484 | orchestrator |
2026-04-16 08:40:58.587496 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:40:58.587522 | orchestrator | Thursday 16 April 2026 08:40:42 +0000 (0:00:01.801) 0:54:49.530 ********
2026-04-16 08:40:58.587562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:40:58.587577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:40:58.587589 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:40:58.587602 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587614 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-04-16 08:40:58.587626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-04-16 08:40:58.587639 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-04-16 08:40:58.587651 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587664 | orchestrator |
2026-04-16 08:40:58.587677 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:40:58.587689 | orchestrator | Thursday 16 April 2026 08:40:44 +0000 (0:00:01.259) 0:54:50.789 ********
2026-04-16 08:40:58.587700 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4
2026-04-16 08:40:58.587712 | orchestrator |
2026-04-16 08:40:58.587723 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:40:58.587734 | orchestrator | Thursday 16 April 2026 08:40:45 +0000 (0:00:01.244) 0:54:52.034 ********
2026-04-16 08:40:58.587745 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587756 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587767 | orchestrator |
2026-04-16 08:40:58.587778 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:40:58.587789 | orchestrator | Thursday 16 April 2026 08:40:46 +0000 (0:00:01.178) 0:54:53.213 ********
2026-04-16 08:40:58.587823 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587835 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587845 | orchestrator |
2026-04-16 08:40:58.587856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:40:58.587868 | orchestrator | Thursday 16 April 2026 08:40:47 +0000 (0:00:01.493) 0:54:54.707 ********
2026-04-16 08:40:58.587878 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.587889 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:40:58.587900 | orchestrator |
2026-04-16 08:40:58.587911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:40:58.587922 | orchestrator | Thursday 16 April 2026 08:40:49 +0000 (0:00:01.224) 0:54:55.931 ********
2026-04-16 08:40:58.587933 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:58.587944 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:58.587955 | orchestrator |
2026-04-16 08:40:58.587966 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:40:58.587977 | orchestrator | Thursday 16 April 2026 08:40:50 +0000 (0:00:01.281) 0:54:57.213 ********
2026-04-16 08:40:58.587987 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:40:58.587998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:40:58.588009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:40:58.588020 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.588031 | orchestrator |
2026-04-16 08:40:58.588042 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:40:58.588052 | orchestrator | Thursday 16 April 2026 08:40:51 +0000 (0:00:01.364) 0:54:58.577 ********
2026-04-16 08:40:58.588063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:40:58.588074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:40:58.588085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:40:58.588096 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.588107 | orchestrator |
2026-04-16 08:40:58.588117 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:40:58.588136 | orchestrator | Thursday 16 April 2026 08:40:53 +0000 (0:00:01.353) 0:54:59.931 ********
2026-04-16 08:40:58.588147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:40:58.588158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:40:58.588169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:40:58.588180 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:40:58.588190 | orchestrator |
2026-04-16 08:40:58.588201 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:40:58.588212 | orchestrator | Thursday 16 April 2026 08:40:54 +0000 (0:00:01.352) 0:55:01.284 ********
2026-04-16 08:40:58.588223 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:40:58.588234 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:40:58.588245 | orchestrator |
2026-04-16 08:40:58.588256 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:40:58.588267 | orchestrator | Thursday 16 April 2026 08:40:55 +0000 (0:00:01.255) 0:55:02.540 ********
2026-04-16 08:40:58.588278 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 08:40:58.588289 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-04-16 08:40:58.588300 | orchestrator |
2026-04-16 08:40:58.588310 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:40:58.588321 | orchestrator | Thursday 16 April 2026 08:40:57 +0000 (0:00:01.775) 0:55:04.315 ********
2026-04-16 08:40:58.588332 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:40:58.588343 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:40:58.588354 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:40:58.588365 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:40:58.588376 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:40:58.588392 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:40:58.588410 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:41:41.469781 | orchestrator |
2026-04-16 08:41:41.469964 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:41:41.470000 | orchestrator | Thursday 16 April 2026 08:40:59 +0000 (0:00:02.110) 0:55:06.426 ********
2026-04-16 08:41:41.470098 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:41:41.470120 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:41:41.470139 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:41:41.470160 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:41:41.470179 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:41:41.470193 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:41:41.470204 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:41:41.470215 | orchestrator |
2026-04-16 08:41:41.470227 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-04-16 08:41:41.470238 | orchestrator | Thursday 16 April 2026 08:41:02 +0000 (0:00:02.532) 0:55:08.958 ********
2026-04-16 08:41:41.470249 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.470261 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.470272 | orchestrator |
2026-04-16 08:41:41.470283 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:41:41.470293 | orchestrator | Thursday 16 April 2026 08:41:03 +0000 (0:00:01.237) 0:55:10.196 ********
2026-04-16 08:41:41.470305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4
2026-04-16 08:41:41.470355 | orchestrator |
2026-04-16 08:41:41.470369 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:41:41.470381 | orchestrator | Thursday 16 April 2026 08:41:04 +0000 (0:00:01.228) 0:55:11.424 ********
2026-04-16 08:41:41.470394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4
2026-04-16 08:41:41.470407 | orchestrator |
2026-04-16 08:41:41.470418 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:41:41.470430 | orchestrator | Thursday 16 April 2026 08:41:05 +0000 (0:00:01.324) 0:55:12.749 ********
2026-04-16 08:41:41.470443 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.470456 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.470469 | orchestrator |
2026-04-16 08:41:41.470481 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:41:41.470494 | orchestrator | Thursday 16 April 2026 08:41:07 +0000 (0:00:01.180) 0:55:13.929 ********
2026-04-16 08:41:41.470506 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.470518 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.470530 | orchestrator |
2026-04-16 08:41:41.470543 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:41:41.470555 | orchestrator | Thursday 16 April 2026 08:41:08 +0000 (0:00:01.620) 0:55:15.550 ********
2026-04-16 08:41:41.470567 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.470579 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.470591 | orchestrator |
2026-04-16 08:41:41.470603 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:41:41.470615 | orchestrator | Thursday 16 April 2026 08:41:10 +0000 (0:00:01.745) 0:55:17.295 ********
2026-04-16 08:41:41.470640 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.470654 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.470676 | orchestrator |
2026-04-16 08:41:41.470687 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:41:41.470698 | orchestrator | Thursday 16 April 2026 08:41:12 +0000 (0:00:01.621) 0:55:18.917 ********
2026-04-16 08:41:41.470709 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.470720 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.470731 | orchestrator |
2026-04-16 08:41:41.470741 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:41:41.470752 | orchestrator | Thursday 16 April 2026 08:41:13 +0000 (0:00:01.235) 0:55:20.153 ********
2026-04-16 08:41:41.470763 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.470773 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.470784 | orchestrator |
2026-04-16 08:41:41.470795 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:41:41.470805 | orchestrator | Thursday 16 April 2026 08:41:14 +0000 (0:00:01.228) 0:55:21.381 ********
2026-04-16 08:41:41.470841 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.470854 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.470864 | orchestrator |
2026-04-16 08:41:41.470875 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:41:41.470886 | orchestrator | Thursday 16 April 2026 08:41:15 +0000 (0:00:01.195) 0:55:22.576 ********
2026-04-16 08:41:41.470896 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.470907 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.470918 | orchestrator |
2026-04-16 08:41:41.470928 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:41:41.470939 | orchestrator | Thursday 16 April 2026 08:41:17 +0000 (0:00:01.628) 0:55:24.205 ********
2026-04-16 08:41:41.470950 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.470961 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.470971 | orchestrator |
2026-04-16 08:41:41.470982 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:41:41.470993 | orchestrator | Thursday 16 April 2026 08:41:19 +0000 (0:00:01.673) 0:55:25.879 ********
2026-04-16 08:41:41.471003 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471023 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471033 | orchestrator |
2026-04-16 08:41:41.471058 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:41:41.471069 | orchestrator | Thursday 16 April 2026 08:41:20 +0000 (0:00:01.204) 0:55:27.084 ********
2026-04-16 08:41:41.471080 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471110 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471121 | orchestrator |
2026-04-16 08:41:41.471132 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:41:41.471143 | orchestrator | Thursday 16 April 2026 08:41:21 +0000 (0:00:01.225) 0:55:28.309 ********
2026-04-16 08:41:41.471154 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.471165 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.471175 | orchestrator |
2026-04-16 08:41:41.471186 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:41:41.471197 | orchestrator | Thursday 16 April 2026 08:41:22 +0000 (0:00:01.199) 0:55:29.509 ********
2026-04-16 08:41:41.471208 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.471218 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.471229 | orchestrator |
2026-04-16 08:41:41.471240 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:41:41.471250 | orchestrator | Thursday 16 April 2026 08:41:23 +0000 (0:00:01.236) 0:55:30.746 ********
2026-04-16 08:41:41.471261 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.471272 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.471282 | orchestrator |
2026-04-16 08:41:41.471293 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:41:41.471304 | orchestrator | Thursday 16 April 2026 08:41:25 +0000 (0:00:01.573) 0:55:32.319 ********
2026-04-16 08:41:41.471315 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471326 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471336 | orchestrator |
2026-04-16 08:41:41.471347 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:41:41.471358 | orchestrator | Thursday 16 April 2026 08:41:26 +0000 (0:00:01.170) 0:55:33.490 ********
2026-04-16 08:41:41.471368 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471379 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471390 | orchestrator |
2026-04-16 08:41:41.471401 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:41:41.471411 | orchestrator | Thursday 16 April 2026 08:41:27 +0000 (0:00:01.211) 0:55:34.702 ********
2026-04-16 08:41:41.471422 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471433 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471443 | orchestrator |
2026-04-16 08:41:41.471454 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:41:41.471465 | orchestrator | Thursday 16 April 2026 08:41:29 +0000 (0:00:01.243) 0:55:35.945 ********
2026-04-16 08:41:41.471475 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.471486 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.471497 | orchestrator |
2026-04-16 08:41:41.471508 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:41:41.471519 | orchestrator | Thursday 16 April 2026 08:41:30 +0000 (0:00:01.219) 0:55:37.165 ********
2026-04-16 08:41:41.471529 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:41:41.471540 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:41:41.471551 | orchestrator |
2026-04-16 08:41:41.471562 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:41:41.471572 | orchestrator | Thursday 16 April 2026 08:41:31 +0000 (0:00:01.356) 0:55:38.522 ********
2026-04-16 08:41:41.471583 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471594 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471605 | orchestrator |
2026-04-16 08:41:41.471615 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:41:41.471626 | orchestrator | Thursday 16 April 2026 08:41:32 +0000 (0:00:01.183) 0:55:39.706 ********
2026-04-16 08:41:41.471644 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471655 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471666 | orchestrator |
2026-04-16 08:41:41.471676 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:41:41.471687 | orchestrator | Thursday 16 April 2026 08:41:34 +0000 (0:00:01.204) 0:55:40.910 ********
2026-04-16 08:41:41.471698 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471709 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471719 | orchestrator |
2026-04-16 08:41:41.471730 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:41:41.471741 | orchestrator | Thursday 16 April 2026 08:41:35 +0000 (0:00:01.168) 0:55:42.079 ********
2026-04-16 08:41:41.471752 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471762 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471773 | orchestrator |
2026-04-16 08:41:41.471784 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:41:41.471795 | orchestrator | Thursday 16 April 2026 08:41:36 +0000 (0:00:01.184) 0:55:43.264 ********
2026-04-16 08:41:41.471805 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471854 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471866 | orchestrator |
2026-04-16 08:41:41.471877 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:41:41.471888 | orchestrator | Thursday 16 April 2026 08:41:37 +0000 (0:00:01.174) 0:55:44.438 ********
2026-04-16 08:41:41.471899 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471909 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471920 | orchestrator |
2026-04-16 08:41:41.471931 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-04-16 08:41:41.471941 | orchestrator | Thursday 16 April 2026 08:41:38 +0000 (0:00:01.194) 0:55:45.633 ********
2026-04-16 08:41:41.471952 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.471962 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.471973 | orchestrator |
2026-04-16 08:41:41.471984 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-04-16 08:41:41.471995 | orchestrator | Thursday 16 April 2026 08:41:40 +0000 (0:00:01.188) 0:55:46.821 ********
2026-04-16 08:41:41.472005 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.472016 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.472027 | orchestrator |
2026-04-16 08:41:41.472038 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-04-16 08:41:41.472054 | orchestrator | Thursday 16 April 2026 08:41:41 +0000 (0:00:01.169) 0:55:47.990 ********
2026-04-16 08:41:41.472065 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:41:41.472076 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:41:41.472087 | orchestrator |
2026-04-16 08:41:41.472105 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-04-16 08:42:25.285191 | orchestrator | Thursday 16 April 2026 08:41:42 +0000 (0:00:01.206) 0:55:49.196 ********
2026-04-16 08:42:25.285331 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.285357 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:42:25.285375 | orchestrator |
2026-04-16 08:42:25.285392 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-04-16 08:42:25.285408 | orchestrator | Thursday 16 April 2026 08:41:43 +0000 (0:00:01.189) 0:55:50.386 ********
2026-04-16 08:42:25.285423 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.285439 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:42:25.285453 | orchestrator |
2026-04-16 08:42:25.285468 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-04-16 08:42:25.285483 | orchestrator | Thursday 16 April 2026 08:41:44 +0000 (0:00:01.208) 0:55:51.595 ********
2026-04-16 08:42:25.285499 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.285514 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:42:25.285529 | orchestrator |
2026-04-16 08:42:25.285545 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-04-16 08:42:25.285592 | orchestrator | Thursday 16 April 2026 08:41:46 +0000 (0:00:01.173) 0:55:52.768 ********
2026-04-16 08:42:25.285609 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:42:25.285626 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:42:25.285642 | orchestrator |
2026-04-16 08:42:25.285657 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-04-16 08:42:25.285673 | orchestrator | Thursday 16 April 2026 08:41:48 +0000 (0:00:02.379) 0:55:55.148 ********
2026-04-16 08:42:25.285688 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:42:25.285705 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:42:25.285720 | orchestrator |
2026-04-16 08:42:25.285737 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-04-16 08:42:25.285753 | orchestrator | Thursday 16 April 2026 08:41:50 +0000 (0:00:02.483) 0:55:57.631 ********
2026-04-16 08:42:25.285770 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4
2026-04-16 08:42:25.285787 | orchestrator |
2026-04-16 08:42:25.285803 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-04-16 08:42:25.285820 | orchestrator | Thursday 16 April 2026 08:41:52 +0000 (0:00:01.197) 0:55:58.829 ********
2026-04-16 08:42:25.285838 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.285886 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:42:25.285904 | orchestrator |
2026-04-16 08:42:25.285920 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-04-16 08:42:25.285936 | orchestrator | Thursday 16 April 2026 08:41:53 +0000 (0:00:01.244) 0:56:00.074 ********
2026-04-16 08:42:25.285952 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.285969 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:42:25.285986 | orchestrator |
2026-04-16 08:42:25.286003 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-04-16 08:42:25.286117 | orchestrator | Thursday 16 April 2026 08:41:54 +0000 (0:00:01.222) 0:56:01.296 ********
2026-04-16 08:42:25.286139 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:42:25.286156 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-04-16 08:42:25.286166 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:42:25.286176 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-04-16 08:42:25.286185 | orchestrator |
2026-04-16 08:42:25.286202 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-04-16 08:42:25.286217 | orchestrator | Thursday 16 April 2026 08:41:56 +0000 (0:00:01.946) 0:56:03.243 ********
2026-04-16 08:42:25.286233 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:42:25.286248 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:42:25.286264 | orchestrator |
2026-04-16 08:42:25.286281 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-04-16 08:42:25.286297 | orchestrator | Thursday 16 April 2026 08:41:58 +0000 (0:00:01.563) 0:56:04.807 ********
2026-04-16 08:42:25.286314 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:42:25.286331 |
orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.286347 | orchestrator | 2026-04-16 08:42:25.286364 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:42:25.286380 | orchestrator | Thursday 16 April 2026 08:41:59 +0000 (0:00:01.239) 0:56:06.046 ******** 2026-04-16 08:42:25.286395 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.286411 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.286426 | orchestrator | 2026-04-16 08:42:25.286443 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:42:25.286458 | orchestrator | Thursday 16 April 2026 08:42:00 +0000 (0:00:01.242) 0:56:07.288 ******** 2026-04-16 08:42:25.286475 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.286490 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.286507 | orchestrator | 2026-04-16 08:42:25.286543 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:42:25.286560 | orchestrator | Thursday 16 April 2026 08:42:01 +0000 (0:00:01.223) 0:56:08.511 ******** 2026-04-16 08:42:25.286575 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4 2026-04-16 08:42:25.286593 | orchestrator | 2026-04-16 08:42:25.286608 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:42:25.286625 | orchestrator | Thursday 16 April 2026 08:42:02 +0000 (0:00:01.232) 0:56:09.744 ******** 2026-04-16 08:42:25.286637 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:42:25.286647 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:42:25.286662 | orchestrator | 2026-04-16 08:42:25.286678 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:42:25.286712 | orchestrator | Thursday 16 April 2026 
08:42:05 +0000 (0:00:02.089) 0:56:11.834 ******** 2026-04-16 08:42:25.286728 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:42:25.286772 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:42:25.286790 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:42:25.286806 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.286821 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:42:25.286837 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:42:25.286882 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:42:25.286897 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.286913 | orchestrator | 2026-04-16 08:42:25.286931 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:42:25.286947 | orchestrator | Thursday 16 April 2026 08:42:06 +0000 (0:00:01.274) 0:56:13.108 ******** 2026-04-16 08:42:25.286957 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.286967 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.286982 | orchestrator | 2026-04-16 08:42:25.286998 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-04-16 08:42:25.287014 | orchestrator | Thursday 16 April 2026 08:42:07 +0000 (0:00:01.201) 0:56:14.309 ******** 2026-04-16 08:42:25.287032 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287049 | orchestrator | 2026-04-16 08:42:25.287065 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:42:25.287083 | orchestrator | Thursday 16 April 2026 08:42:08 +0000 (0:00:01.123) 0:56:15.433 ******** 2026-04-16 08:42:25.287099 | orchestrator 
| skipping: [testbed-node-3] 2026-04-16 08:42:25.287115 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287131 | orchestrator | 2026-04-16 08:42:25.287147 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:42:25.287163 | orchestrator | Thursday 16 April 2026 08:42:09 +0000 (0:00:01.201) 0:56:16.635 ******** 2026-04-16 08:42:25.287178 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287193 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287208 | orchestrator | 2026-04-16 08:42:25.287223 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:42:25.287239 | orchestrator | Thursday 16 April 2026 08:42:11 +0000 (0:00:01.214) 0:56:17.849 ******** 2026-04-16 08:42:25.287254 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287270 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287286 | orchestrator | 2026-04-16 08:42:25.287302 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:42:25.287318 | orchestrator | Thursday 16 April 2026 08:42:12 +0000 (0:00:01.214) 0:56:19.064 ******** 2026-04-16 08:42:25.287334 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:42:25.287349 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:42:25.287365 | orchestrator | 2026-04-16 08:42:25.287382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:42:25.287414 | orchestrator | Thursday 16 April 2026 08:42:14 +0000 (0:00:02.625) 0:56:21.690 ******** 2026-04-16 08:42:25.287430 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:42:25.287447 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:42:25.287463 | orchestrator | 2026-04-16 08:42:25.287479 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:42:25.287496 | 
orchestrator | Thursday 16 April 2026 08:42:16 +0000 (0:00:01.273) 0:56:22.963 ******** 2026-04-16 08:42:25.287513 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4 2026-04-16 08:42:25.287531 | orchestrator | 2026-04-16 08:42:25.287548 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:42:25.287563 | orchestrator | Thursday 16 April 2026 08:42:17 +0000 (0:00:01.173) 0:56:24.136 ******** 2026-04-16 08:42:25.287578 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287595 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287611 | orchestrator | 2026-04-16 08:42:25.287628 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:42:25.287643 | orchestrator | Thursday 16 April 2026 08:42:18 +0000 (0:00:01.241) 0:56:25.378 ******** 2026-04-16 08:42:25.287660 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287677 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287692 | orchestrator | 2026-04-16 08:42:25.287709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:42:25.287722 | orchestrator | Thursday 16 April 2026 08:42:19 +0000 (0:00:01.198) 0:56:26.577 ******** 2026-04-16 08:42:25.287732 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287741 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287751 | orchestrator | 2026-04-16 08:42:25.287760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:42:25.287770 | orchestrator | Thursday 16 April 2026 08:42:21 +0000 (0:00:01.203) 0:56:27.780 ******** 2026-04-16 08:42:25.287779 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287789 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287798 | orchestrator | 2026-04-16 
08:42:25.287808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-04-16 08:42:25.287817 | orchestrator | Thursday 16 April 2026 08:42:22 +0000 (0:00:01.538) 0:56:29.319 ******** 2026-04-16 08:42:25.287826 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287836 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287911 | orchestrator | 2026-04-16 08:42:25.287924 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:42:25.287933 | orchestrator | Thursday 16 April 2026 08:42:23 +0000 (0:00:01.224) 0:56:30.544 ******** 2026-04-16 08:42:25.287943 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.287953 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.287962 | orchestrator | 2026-04-16 08:42:25.287972 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:42:25.287991 | orchestrator | Thursday 16 April 2026 08:42:25 +0000 (0:00:01.219) 0:56:31.764 ******** 2026-04-16 08:42:25.288001 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:42:25.288011 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:42:25.288021 | orchestrator | 2026-04-16 08:42:25.288053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:43:03.987047 | orchestrator | Thursday 16 April 2026 08:42:26 +0000 (0:00:01.227) 0:56:32.992 ******** 2026-04-16 08:43:03.987155 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.987172 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.987183 | orchestrator | 2026-04-16 08:43:03.987195 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:43:03.987206 | orchestrator | Thursday 16 April 2026 08:42:27 +0000 (0:00:01.206) 0:56:34.198 ******** 2026-04-16 08:43:03.987216 | orchestrator | ok: 
[testbed-node-3] 2026-04-16 08:43:03.987227 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:03.987260 | orchestrator | 2026-04-16 08:43:03.987271 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:43:03.987281 | orchestrator | Thursday 16 April 2026 08:42:28 +0000 (0:00:01.414) 0:56:35.612 ******** 2026-04-16 08:43:03.987291 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4 2026-04-16 08:43:03.987302 | orchestrator | 2026-04-16 08:43:03.987312 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:43:03.987322 | orchestrator | Thursday 16 April 2026 08:42:30 +0000 (0:00:01.208) 0:56:36.821 ******** 2026-04-16 08:43:03.987331 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-16 08:43:03.987341 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-16 08:43:03.987351 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-16 08:43:03.987361 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-16 08:43:03.987371 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-16 08:43:03.987380 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-16 08:43:03.987389 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-16 08:43:03.987399 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-16 08:43:03.987408 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-16 08:43:03.987419 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-16 08:43:03.987428 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-16 08:43:03.987437 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-16 08:43:03.987447 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 
2026-04-16 08:43:03.987457 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-16 08:43:03.987466 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:43:03.987476 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:43:03.987486 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:43:03.987496 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:43:03.987506 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:43:03.987516 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:43:03.987526 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:43:03.987536 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:43:03.987546 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:43:03.987557 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:43:03.987568 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:43:03.987578 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:43:03.987588 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:43:03.987598 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:43:03.987609 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-16 08:43:03.987620 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-16 08:43:03.987630 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-16 08:43:03.987640 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-16 08:43:03.987649 | orchestrator | 2026-04-16 08:43:03.987659 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:43:03.987670 | orchestrator | Thursday 16 April 2026 08:42:37 +0000 (0:00:06.958) 0:56:43.779 ******** 2026-04-16 08:43:03.987680 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4 2026-04-16 08:43:03.987689 | orchestrator | 2026-04-16 08:43:03.987699 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:43:03.987718 | orchestrator | Thursday 16 April 2026 08:42:38 +0000 (0:00:01.241) 0:56:45.020 ******** 2026-04-16 08:43:03.987730 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.987741 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.987750 | orchestrator | 2026-04-16 08:43:03.987760 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:43:03.987771 | orchestrator | Thursday 16 April 2026 08:42:39 +0000 (0:00:01.618) 0:56:46.639 ******** 2026-04-16 08:43:03.987781 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.987807 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.987818 | orchestrator | 2026-04-16 08:43:03.987828 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:43:03.987855 | orchestrator | Thursday 16 April 2026 08:42:42 +0000 (0:00:02.472) 0:56:49.112 ******** 2026-04-16 08:43:03.987867 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.987878 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 08:43:03.987888 | orchestrator | 2026-04-16 08:43:03.987927 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:43:03.987938 | orchestrator | Thursday 16 April 2026 08:42:43 +0000 (0:00:01.223) 0:56:50.336 ******** 2026-04-16 08:43:03.987947 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.987957 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.987967 | orchestrator | 2026-04-16 08:43:03.987978 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:43:03.987989 | orchestrator | Thursday 16 April 2026 08:42:44 +0000 (0:00:01.206) 0:56:51.542 ******** 2026-04-16 08:43:03.987998 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988008 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988018 | orchestrator | 2026-04-16 08:43:03.988029 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:43:03.988039 | orchestrator | Thursday 16 April 2026 08:42:46 +0000 (0:00:01.321) 0:56:52.863 ******** 2026-04-16 08:43:03.988050 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988059 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988069 | orchestrator | 2026-04-16 08:43:03.988079 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:43:03.988090 | orchestrator | Thursday 16 April 2026 08:42:47 +0000 (0:00:01.198) 0:56:54.061 ******** 2026-04-16 08:43:03.988100 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988111 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988121 | orchestrator | 2026-04-16 08:43:03.988131 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:43:03.988141 | orchestrator | Thursday 16 April 2026 
08:42:48 +0000 (0:00:01.242) 0:56:55.304 ******** 2026-04-16 08:43:03.988151 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988161 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988170 | orchestrator | 2026-04-16 08:43:03.988181 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:43:03.988192 | orchestrator | Thursday 16 April 2026 08:42:49 +0000 (0:00:01.175) 0:56:56.479 ******** 2026-04-16 08:43:03.988202 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988212 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988222 | orchestrator | 2026-04-16 08:43:03.988231 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-04-16 08:43:03.988242 | orchestrator | Thursday 16 April 2026 08:42:51 +0000 (0:00:01.502) 0:56:57.982 ******** 2026-04-16 08:43:03.988261 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988272 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988282 | orchestrator | 2026-04-16 08:43:03.988292 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:43:03.988302 | orchestrator | Thursday 16 April 2026 08:42:52 +0000 (0:00:01.198) 0:56:59.181 ******** 2026-04-16 08:43:03.988312 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988322 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988332 | orchestrator | 2026-04-16 08:43:03.988342 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:43:03.988353 | orchestrator | Thursday 16 April 2026 08:42:53 +0000 (0:00:01.208) 0:57:00.390 ******** 2026-04-16 08:43:03.988362 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988373 | orchestrator | skipping: [testbed-node-4] 2026-04-16 
08:43:03.988383 | orchestrator | 2026-04-16 08:43:03.988393 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:43:03.988404 | orchestrator | Thursday 16 April 2026 08:42:54 +0000 (0:00:01.193) 0:57:01.584 ******** 2026-04-16 08:43:03.988414 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:03.988424 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:03.988434 | orchestrator | 2026-04-16 08:43:03.988444 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:43:03.988453 | orchestrator | Thursday 16 April 2026 08:42:56 +0000 (0:00:01.285) 0:57:02.869 ******** 2026-04-16 08:43:03.988462 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:43:03.988471 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:43:03.988480 | orchestrator | 2026-04-16 08:43:03.988488 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:43:03.988497 | orchestrator | Thursday 16 April 2026 08:43:00 +0000 (0:00:04.574) 0:57:07.444 ******** 2026-04-16 08:43:03.988505 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.988514 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:43:03.988522 | orchestrator | 2026-04-16 08:43:03.988530 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:43:03.988539 | orchestrator | Thursday 16 April 2026 08:43:01 +0000 (0:00:01.246) 0:57:08.691 ******** 2026-04-16 08:43:03.988550 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': 
'/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-16 08:43:03.988575 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-16 08:43:53.888846 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-16 08:43:53.889003 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-16 08:43:53.889020 | orchestrator | 2026-04-16 08:43:53.889053 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:43:53.889064 | orchestrator | Thursday 16 April 2026 08:43:07 +0000 (0:00:05.140) 0:57:13.831 ******** 2026-04-16 08:43:53.889073 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889083 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889092 | orchestrator | 2026-04-16 08:43:53.889101 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:43:53.889110 | orchestrator | Thursday 16 April 2026 08:43:08 +0000 
(0:00:01.280) 0:57:15.112 ******** 2026-04-16 08:43:53.889119 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889128 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889136 | orchestrator | 2026-04-16 08:43:53.889146 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:43:53.889156 | orchestrator | Thursday 16 April 2026 08:43:09 +0000 (0:00:01.197) 0:57:16.310 ******** 2026-04-16 08:43:53.889164 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889173 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889181 | orchestrator | 2026-04-16 08:43:53.889190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:43:53.889199 | orchestrator | Thursday 16 April 2026 08:43:10 +0000 (0:00:01.231) 0:57:17.541 ******** 2026-04-16 08:43:53.889207 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889216 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889224 | orchestrator | 2026-04-16 08:43:53.889233 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:43:53.889242 | orchestrator | Thursday 16 April 2026 08:43:12 +0000 (0:00:01.251) 0:57:18.792 ******** 2026-04-16 08:43:53.889250 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889259 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889268 | orchestrator | 2026-04-16 08:43:53.889276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:43:53.889285 | orchestrator | Thursday 16 April 2026 08:43:13 +0000 (0:00:01.287) 0:57:20.080 ******** 2026-04-16 08:43:53.889294 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.889303 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.889312 | orchestrator | 2026-04-16 
08:43:53.889321 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:43:53.889329 | orchestrator | Thursday 16 April 2026 08:43:14 +0000 (0:00:01.656) 0:57:21.736 ******** 2026-04-16 08:43:53.889338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:43:53.889347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:43:53.889356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:43:53.889364 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889373 | orchestrator | 2026-04-16 08:43:53.889382 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:43:53.889392 | orchestrator | Thursday 16 April 2026 08:43:16 +0000 (0:00:01.430) 0:57:23.167 ******** 2026-04-16 08:43:53.889402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:43:53.889412 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:43:53.889423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:43:53.889433 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889443 | orchestrator | 2026-04-16 08:43:53.889453 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:43:53.889463 | orchestrator | Thursday 16 April 2026 08:43:17 +0000 (0:00:01.451) 0:57:24.618 ******** 2026-04-16 08:43:53.889472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:43:53.889480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:43:53.889489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:43:53.889497 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889513 | orchestrator | 2026-04-16 08:43:53.889522 | orchestrator | TASK [ceph-facts : 
Reset rgw_instances (workaround)] *************************** 2026-04-16 08:43:53.889530 | orchestrator | Thursday 16 April 2026 08:43:19 +0000 (0:00:01.410) 0:57:26.029 ******** 2026-04-16 08:43:53.889539 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.889547 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.889556 | orchestrator | 2026-04-16 08:43:53.889565 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:43:53.889573 | orchestrator | Thursday 16 April 2026 08:43:20 +0000 (0:00:01.289) 0:57:27.319 ******** 2026-04-16 08:43:53.889582 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-04-16 08:43:53.889604 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 08:43:53.889613 | orchestrator | 2026-04-16 08:43:53.889621 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:43:53.889630 | orchestrator | Thursday 16 April 2026 08:43:21 +0000 (0:00:01.395) 0:57:28.715 ******** 2026-04-16 08:43:53.889639 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.889647 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.889656 | orchestrator | 2026-04-16 08:43:53.889680 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-04-16 08:43:53.889689 | orchestrator | Thursday 16 April 2026 08:43:23 +0000 (0:00:01.977) 0:57:30.693 ******** 2026-04-16 08:43:53.889698 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.889706 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.889715 | orchestrator | 2026-04-16 08:43:53.889724 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-04-16 08:43:53.889732 | orchestrator | Thursday 16 April 2026 08:43:25 +0000 (0:00:01.247) 0:57:31.940 ******** 2026-04-16 08:43:53.889741 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, 
testbed-node-4 2026-04-16 08:43:53.889750 | orchestrator | 2026-04-16 08:43:53.889759 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-04-16 08:43:53.889768 | orchestrator | Thursday 16 April 2026 08:43:26 +0000 (0:00:01.207) 0:57:33.147 ******** 2026-04-16 08:43:53.889776 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 08:43:53.889785 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-04-16 08:43:53.889794 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-04-16 08:43:53.889802 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-04-16 08:43:53.889811 | orchestrator | 2026-04-16 08:43:53.889820 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-04-16 08:43:53.889828 | orchestrator | Thursday 16 April 2026 08:43:28 +0000 (0:00:02.010) 0:57:35.158 ******** 2026-04-16 08:43:53.889837 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:43:53.889846 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:43:53.889855 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:43:53.889863 | orchestrator | 2026-04-16 08:43:53.889872 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:43:53.889881 | orchestrator | Thursday 16 April 2026 08:43:31 +0000 (0:00:03.180) 0:57:38.339 ******** 2026-04-16 08:43:53.889889 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 08:43:53.889898 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:43:53.889907 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.889916 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 08:43:53.889924 | orchestrator | skipping: [testbed-node-4] => 
(item=None)  2026-04-16 08:43:53.889933 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890079 | orchestrator | 2026-04-16 08:43:53.890097 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-04-16 08:43:53.890110 | orchestrator | Thursday 16 April 2026 08:43:33 +0000 (0:00:02.139) 0:57:40.478 ******** 2026-04-16 08:43:53.890123 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.890145 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890158 | orchestrator | 2026-04-16 08:43:53.890171 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-04-16 08:43:53.890184 | orchestrator | Thursday 16 April 2026 08:43:35 +0000 (0:00:01.881) 0:57:42.360 ******** 2026-04-16 08:43:53.890197 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.890210 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:43:53.890224 | orchestrator | 2026-04-16 08:43:53.890237 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-04-16 08:43:53.890250 | orchestrator | Thursday 16 April 2026 08:43:36 +0000 (0:00:01.209) 0:57:43.569 ******** 2026-04-16 08:43:53.890262 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4 2026-04-16 08:43:53.890276 | orchestrator | 2026-04-16 08:43:53.890288 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-04-16 08:43:53.890302 | orchestrator | Thursday 16 April 2026 08:43:38 +0000 (0:00:01.226) 0:57:44.796 ******** 2026-04-16 08:43:53.890315 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4 2026-04-16 08:43:53.890328 | orchestrator | 2026-04-16 08:43:53.890340 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-04-16 08:43:53.890353 | orchestrator | Thursday 16 April 2026 
08:43:39 +0000 (0:00:01.213) 0:57:46.010 ******** 2026-04-16 08:43:53.890366 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.890378 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890392 | orchestrator | 2026-04-16 08:43:53.890405 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-04-16 08:43:53.890418 | orchestrator | Thursday 16 April 2026 08:43:41 +0000 (0:00:02.146) 0:57:48.157 ******** 2026-04-16 08:43:53.890431 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.890444 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890458 | orchestrator | 2026-04-16 08:43:53.890470 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-04-16 08:43:53.890482 | orchestrator | Thursday 16 April 2026 08:43:43 +0000 (0:00:02.402) 0:57:50.559 ******** 2026-04-16 08:43:53.890493 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.890506 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890519 | orchestrator | 2026-04-16 08:43:53.890533 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-04-16 08:43:53.890547 | orchestrator | Thursday 16 April 2026 08:43:46 +0000 (0:00:02.522) 0:57:53.082 ******** 2026-04-16 08:43:53.890558 | orchestrator | changed: [testbed-node-3] 2026-04-16 08:43:53.890566 | orchestrator | changed: [testbed-node-4] 2026-04-16 08:43:53.890574 | orchestrator | 2026-04-16 08:43:53.890582 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-04-16 08:43:53.890589 | orchestrator | Thursday 16 April 2026 08:43:50 +0000 (0:00:03.720) 0:57:56.803 ******** 2026-04-16 08:43:53.890605 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:43:53.890613 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:43:53.890621 | orchestrator | 2026-04-16 08:43:53.890629 | orchestrator | TASK [Set max_mds] 
************************************************************* 2026-04-16 08:43:53.890637 | orchestrator | Thursday 16 April 2026 08:43:51 +0000 (0:00:01.745) 0:57:58.548 ******** 2026-04-16 08:43:53.890645 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:43:53.890661 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:44:17.198418 | orchestrator | 2026-04-16 08:44:17.198551 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-16 08:44:17.198575 | orchestrator | 2026-04-16 08:44:17.198591 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:44:17.198607 | orchestrator | Thursday 16 April 2026 08:43:55 +0000 (0:00:03.323) 0:58:01.871 ******** 2026-04-16 08:44:17.198622 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-04-16 08:44:17.198637 | orchestrator | 2026-04-16 08:44:17.198652 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:44:17.198695 | orchestrator | Thursday 16 April 2026 08:43:56 +0000 (0:00:01.135) 0:58:03.007 ******** 2026-04-16 08:44:17.198712 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.198728 | orchestrator | 2026-04-16 08:44:17.198742 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:44:17.198758 | orchestrator | Thursday 16 April 2026 08:43:57 +0000 (0:00:01.418) 0:58:04.426 ******** 2026-04-16 08:44:17.198774 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.198789 | orchestrator | 2026-04-16 08:44:17.198804 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:44:17.198818 | orchestrator | Thursday 16 April 2026 08:43:58 +0000 (0:00:01.082) 0:58:05.508 ******** 2026-04-16 08:44:17.198833 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.198848 | 
orchestrator | 2026-04-16 08:44:17.198862 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:44:17.198878 | orchestrator | Thursday 16 April 2026 08:44:00 +0000 (0:00:01.419) 0:58:06.927 ******** 2026-04-16 08:44:17.198892 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.198906 | orchestrator | 2026-04-16 08:44:17.198921 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:44:17.198935 | orchestrator | Thursday 16 April 2026 08:44:01 +0000 (0:00:01.160) 0:58:08.088 ******** 2026-04-16 08:44:17.199042 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.199063 | orchestrator | 2026-04-16 08:44:17.199078 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:44:17.199094 | orchestrator | Thursday 16 April 2026 08:44:02 +0000 (0:00:01.101) 0:58:09.189 ******** 2026-04-16 08:44:17.199108 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.199123 | orchestrator | 2026-04-16 08:44:17.199138 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:44:17.199154 | orchestrator | Thursday 16 April 2026 08:44:03 +0000 (0:00:01.139) 0:58:10.328 ******** 2026-04-16 08:44:17.199171 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:17.199187 | orchestrator | 2026-04-16 08:44:17.199201 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:44:17.199216 | orchestrator | Thursday 16 April 2026 08:44:04 +0000 (0:00:01.129) 0:58:11.458 ******** 2026-04-16 08:44:17.199231 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.199245 | orchestrator | 2026-04-16 08:44:17.199259 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:44:17.199274 | orchestrator | Thursday 16 April 2026 08:44:05 +0000 
(0:00:01.136) 0:58:12.594 ******** 2026-04-16 08:44:17.199288 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:44:17.199303 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:44:17.199318 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:44:17.199333 | orchestrator | 2026-04-16 08:44:17.199347 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-04-16 08:44:17.199362 | orchestrator | Thursday 16 April 2026 08:44:07 +0000 (0:00:01.957) 0:58:14.552 ******** 2026-04-16 08:44:17.199376 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:17.199390 | orchestrator | 2026-04-16 08:44:17.199404 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:44:17.199419 | orchestrator | Thursday 16 April 2026 08:44:09 +0000 (0:00:01.335) 0:58:15.888 ******** 2026-04-16 08:44:17.199433 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:44:17.199447 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:44:17.199461 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:44:17.199475 | orchestrator | 2026-04-16 08:44:17.199490 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:44:17.199517 | orchestrator | Thursday 16 April 2026 08:44:12 +0000 (0:00:03.167) 0:58:19.055 ******** 2026-04-16 08:44:17.199533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-04-16 08:44:17.199548 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-04-16 08:44:17.199563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-04-16 
08:44:17.199577 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:17.199593 | orchestrator | 2026-04-16 08:44:17.199609 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:44:17.199623 | orchestrator | Thursday 16 April 2026 08:44:14 +0000 (0:00:01.932) 0:58:20.988 ******** 2026-04-16 08:44:17.199639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:44:17.199666 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:44:17.199693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:44:17.199703 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:17.199712 | orchestrator | 2026-04-16 08:44:17.199721 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:44:17.199729 | orchestrator | Thursday 16 April 2026 08:44:15 +0000 (0:00:01.617) 0:58:22.606 ******** 2026-04-16 08:44:17.199740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 
08:44:17.199753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:44:17.199762 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:44:17.199770 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:17.199779 | orchestrator | 2026-04-16 08:44:17.199788 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:44:17.199796 | orchestrator | Thursday 16 April 2026 08:44:16 +0000 (0:00:01.148) 0:58:23.755 ******** 2026-04-16 08:44:17.199807 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:44:09.990244', 'end': '2026-04-16 08:44:10.039386', 'delta': '0:00:00.049142', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:44:17.199826 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:44:10.522442', 'end': '2026-04-16 08:44:10.583741', 'delta': '0:00:00.061299', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:44:17.199840 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:44:11.113206', 'end': '2026-04-16 08:44:11.170507', 'delta': '0:00:00.057301', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:44:17.199849 | orchestrator | 2026-04-16 08:44:17.199863 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:44:35.316113 | orchestrator | Thursday 16 April 2026 08:44:18 +0000 (0:00:01.172) 0:58:24.928 ******** 2026-04-16 08:44:35.316239 | orchestrator | ok: [testbed-node-3] 2026-04-16 
08:44:35.316260 | orchestrator | 2026-04-16 08:44:35.316271 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:44:35.316280 | orchestrator | Thursday 16 April 2026 08:44:19 +0000 (0:00:01.231) 0:58:26.159 ******** 2026-04-16 08:44:35.316293 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316310 | orchestrator | 2026-04-16 08:44:35.316330 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-04-16 08:44:35.316344 | orchestrator | Thursday 16 April 2026 08:44:20 +0000 (0:00:01.231) 0:58:27.391 ******** 2026-04-16 08:44:35.316357 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:35.316370 | orchestrator | 2026-04-16 08:44:35.316384 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:44:35.316396 | orchestrator | Thursday 16 April 2026 08:44:21 +0000 (0:00:01.108) 0:58:28.500 ******** 2026-04-16 08:44:35.316407 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:44:35.316421 | orchestrator | 2026-04-16 08:44:35.316435 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:44:35.316448 | orchestrator | Thursday 16 April 2026 08:44:23 +0000 (0:00:01.966) 0:58:30.467 ******** 2026-04-16 08:44:35.316462 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:35.316475 | orchestrator | 2026-04-16 08:44:35.316488 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:44:35.316501 | orchestrator | Thursday 16 April 2026 08:44:24 +0000 (0:00:01.109) 0:58:31.576 ******** 2026-04-16 08:44:35.316514 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316527 | orchestrator | 2026-04-16 08:44:35.316540 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:44:35.316554 | orchestrator 
| Thursday 16 April 2026 08:44:25 +0000 (0:00:01.097) 0:58:32.674 ******** 2026-04-16 08:44:35.316567 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316581 | orchestrator | 2026-04-16 08:44:35.316597 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:44:35.316646 | orchestrator | Thursday 16 April 2026 08:44:27 +0000 (0:00:01.273) 0:58:33.947 ******** 2026-04-16 08:44:35.316660 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316672 | orchestrator | 2026-04-16 08:44:35.316684 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:44:35.316697 | orchestrator | Thursday 16 April 2026 08:44:28 +0000 (0:00:01.125) 0:58:35.072 ******** 2026-04-16 08:44:35.316710 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316724 | orchestrator | 2026-04-16 08:44:35.316738 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:44:35.316752 | orchestrator | Thursday 16 April 2026 08:44:29 +0000 (0:00:01.121) 0:58:36.194 ******** 2026-04-16 08:44:35.316786 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:35.316800 | orchestrator | 2026-04-16 08:44:35.316821 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:44:35.316830 | orchestrator | Thursday 16 April 2026 08:44:30 +0000 (0:00:01.202) 0:58:37.396 ******** 2026-04-16 08:44:35.316840 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316849 | orchestrator | 2026-04-16 08:44:35.316858 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:44:35.316867 | orchestrator | Thursday 16 April 2026 08:44:31 +0000 (0:00:01.108) 0:58:38.504 ******** 2026-04-16 08:44:35.316876 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:35.316885 | orchestrator | 2026-04-16 08:44:35.316894 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:44:35.316903 | orchestrator | Thursday 16 April 2026 08:44:32 +0000 (0:00:01.144) 0:58:39.648 ******** 2026-04-16 08:44:35.316911 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:44:35.316919 | orchestrator | 2026-04-16 08:44:35.316927 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:44:35.316935 | orchestrator | Thursday 16 April 2026 08:44:33 +0000 (0:00:01.086) 0:58:40.735 ******** 2026-04-16 08:44:35.316943 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:44:35.316951 | orchestrator | 2026-04-16 08:44:35.316982 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:44:35.316992 | orchestrator | Thursday 16 April 2026 08:44:35 +0000 (0:00:01.137) 0:58:41.872 ******** 2026-04-16 08:44:35.317003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:35.317030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}})  2026-04-16 08:44:35.317062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:44:35.317081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}})  2026-04-16 08:44:35.317090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:35.317099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:35.317108 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:44:35.317117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:35.317130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:44:35.317146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:36.615797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}})  2026-04-16 08:44:36.615928 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}})  2026-04-16 08:44:36.615945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:44:36.616027 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-04-16 08:44:36.616075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:44:36.616098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:44:36.616121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:44:36.616149 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:44:36.616189 | orchestrator |
2026-04-16 08:44:36.616210 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 08:44:36.616228 | orchestrator | Thursday 16 April 2026 08:44:36 +0000 (0:00:01.366) 0:58:43.238 ********
2026-04-16 08:44:36.616248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.616268 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab', 'dm-uuid-LVM-s1RJewCEMmndeMDp9Spc64rvcerwSGzbQbQl1KeLuYCbn8R8b84zAGP266l0jlxg'], 'uuids': ['e9f76026-4aae-4cda-b4a7-e0cc49e3ab39'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.616296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb', 'scsi-SQEMU_QEMU_HARDDISK_2cf3122c-2131-4b44-b1eb-9d24190083bb'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU',
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '2cf3122c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.616345 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-xUmyeI-bWmv-U8FU-AfUK-Rvd0-z7ET-AdgXoZ', 'scsi-0QEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d', 'scsi-SQEMU_QEMU_HARDDISK_9b00dc68-d40c-4d0e-a7b6-6fb44f0c533d'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732569 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-45-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732698 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732726 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503', 'dm-uuid-CRYPT-LUKS2-5ffaaf022b774dc4a91bc2ef115e9266-yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--c8cebb68--f409--516c--8b4d--2b5a47d5dab9-osd--block--c8cebb68--f409--516c--8b4d--2b5a47d5dab9', 'dm-uuid-LVM-PPzpqRHnsjL1vEIDI7UMYdPp527zonCNyBKcCiIok426ljmKDKBR2TfsU2c2q503'], 'uuids': ['5ffaaf02-2b77-4dc4-a91b-c2ef115e9266'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9b00dc68', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['yBKcCi-Iok4-26lj-mKDK-BR2T-fsU2-c2q503']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-hVwBBs-KeT7-naye-LPpU-SNff-cx0t-U2KIoO', 'scsi-0QEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834', 'scsi-SQEMU_QEMU_HARDDISK_68199fda-8c99-469d-abab-c5a57188e834'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '68199fda', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode':
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5d85d6a1--6c0d--5a96--8279--fc702a5664ab-osd--block--5d85d6a1--6c0d--5a96--8279--fc702a5664ab']}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732821 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:44:36.732848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '375db26a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1', 'scsi-SQEMU_QEMU_HARDDISK_375db26a-2184-4380-988d-01ed4e876c64-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:45:05.142363 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:45:05.142476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:45:05.142494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg', 'dm-uuid-CRYPT-LUKS2-e9f760264aae4cdab4a7e0cc49e3ab39-QbQl1K-eLuY-Cbn8-R8b8-4zAG-P266-l0jlxg'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:45:05.142533 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.142548 | orchestrator |
2026-04-16 08:45:05.142561 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:45:05.142587 | orchestrator | Thursday 16 April 2026 08:44:37 +0000 (0:00:01.568) 0:58:44.621 ********
2026-04-16 08:45:05.142599 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:05.142611 | orchestrator |
2026-04-16 08:45:05.142623 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:45:05.142634 | orchestrator | Thursday 16 April 2026 08:44:39 +0000 (0:00:01.123) 0:58:46.189 ********
2026-04-16 08:45:05.142645 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:05.142655 | orchestrator |
2026-04-16 08:45:05.142666 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:45:05.142677 | orchestrator | Thursday 16 April 2026 08:44:40 +0000 (0:00:01.123) 0:58:47.312 ********
2026-04-16 08:45:05.142688 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:05.142699 | orchestrator |
2026-04-16 08:45:05.142709 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:45:05.142720 | orchestrator | Thursday 16 April 2026 08:44:42 +0000 (0:00:01.478) 0:58:48.791 ********
2026-04-16 08:45:05.142731 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.142742 | orchestrator |
2026-04-16 08:45:05.142753 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:45:05.142763 | orchestrator | Thursday 16 April 2026 08:44:43 +0000 (0:00:01.197) 0:58:49.988 ********
2026-04-16 08:45:05.142774 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.142785 | orchestrator |
2026-04-16 08:45:05.142796 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:45:05.142806 | orchestrator | Thursday 16 April 2026 08:44:44 +0000 (0:00:01.249) 0:58:51.238 ********
2026-04-16 08:45:05.142817 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.142828 | orchestrator |
2026-04-16 08:45:05.142839 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:45:05.142849 | orchestrator | Thursday 16 April 2026 08:44:45 +0000 (0:00:01.126) 0:58:52.365 ********
2026-04-16 08:45:05.142860 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:45:05.142871 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:45:05.142882 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:45:05.142896 | orchestrator |
2026-04-16 08:45:05.142908 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:45:05.142921 | orchestrator | Thursday 16 April 2026 08:44:47 +0000 (0:00:01.967) 0:58:54.333 ********
2026-04-16 08:45:05.142934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-04-16 08:45:05.142947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-04-16 08:45:05.142960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-04-16 08:45:05.142999 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143012 | orchestrator |
2026-04-16 08:45:05.143025 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:45:05.143037 | orchestrator | Thursday 16 April 2026 08:44:48 +0000 (0:00:01.177) 0:58:55.510 ********
2026-04-16 08:45:05.143067 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3
2026-04-16 08:45:05.143081 | orchestrator |
2026-04-16 08:45:05.143095 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:45:05.143109 | orchestrator | Thursday 16 April 2026 08:44:49 +0000 (0:00:01.115) 0:58:56.626 ********
2026-04-16 08:45:05.143122 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143135 | orchestrator |
2026-04-16 08:45:05.143147 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:45:05.143160 | orchestrator | Thursday 16 April 2026 08:44:50 +0000 (0:00:01.123) 0:58:57.750 ********
2026-04-16 08:45:05.143182 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143195 | orchestrator |
2026-04-16 08:45:05.143208 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:45:05.143221 | orchestrator | Thursday 16 April 2026 08:44:52 +0000 (0:00:01.103) 0:58:58.854 ********
2026-04-16 08:45:05.143234 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143247 | orchestrator |
2026-04-16 08:45:05.143260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:45:05.143278 | orchestrator | Thursday 16 April 2026 08:44:53 +0000 (0:00:01.120) 0:58:59.975 ********
2026-04-16 08:45:05.143297 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:05.143315 | orchestrator |
2026-04-16 08:45:05.143334 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:45:05.143345 | orchestrator | Thursday 16 April 2026 08:44:54 +0000 (0:00:01.214) 0:59:01.189 ********
2026-04-16 08:45:05.143356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:45:05.143367 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:45:05.143378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:45:05.143388 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143399 | orchestrator |
2026-04-16 08:45:05.143410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:45:05.143421 | orchestrator | Thursday 16 April 2026 08:44:55 +0000 (0:00:01.408) 0:59:02.597 ********
2026-04-16 08:45:05.143432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:45:05.143443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:45:05.143453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:45:05.143464 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143475 | orchestrator |
2026-04-16 08:45:05.143485 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:45:05.143496 | orchestrator | Thursday 16 April 2026 08:44:57 +0000 (0:00:01.366) 0:59:03.964 ********
2026-04-16 08:45:05.143507 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:45:05.143517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-04-16 08:45:05.143528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-04-16 08:45:05.143545 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:05.143556 | orchestrator |
2026-04-16 08:45:05.143567 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:45:05.143577 | orchestrator | Thursday 16 April 2026 08:44:58 +0000 (0:00:01.352) 0:59:05.316 ********
2026-04-16 08:45:05.143588 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:05.143599 | orchestrator |
2026-04-16 08:45:05.143610 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:45:05.143621 | orchestrator | Thursday 16 April 2026 08:44:59 +0000
(0:00:01.147) 0:59:06.464 ********
2026-04-16 08:45:05.143632 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-04-16 08:45:05.143642 | orchestrator |
2026-04-16 08:45:05.143653 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:45:05.143664 | orchestrator | Thursday 16 April 2026 08:45:01 +0000 (0:00:01.664) 0:59:08.129 ********
2026-04-16 08:45:05.143675 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:45:05.143686 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:45:05.143697 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:45:05.143710 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:45:05.143729 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:45:05.143746 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:45:05.143762 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:45:05.143791 | orchestrator |
2026-04-16 08:45:05.143809 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:45:05.143825 | orchestrator | Thursday 16 April 2026 08:45:03 +0000 (0:00:02.140) 0:59:10.269 ********
2026-04-16 08:45:05.143840 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:45:05.143859 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:45:05.143875 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:45:05.143893 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-04-16 08:45:05.143912 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:45:05.143929 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-04-16 08:45:05.143947 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:45:05.143966 | orchestrator |
2026-04-16 08:45:05.144026 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-16 08:45:58.278467 | orchestrator | Thursday 16 April 2026 08:45:06 +0000 (0:00:02.555) 0:59:12.825 ********
2026-04-16 08:45:58.278562 | orchestrator | changed: [testbed-node-3]
2026-04-16 08:45:58.278572 | orchestrator |
2026-04-16 08:45:58.278580 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-16 08:45:58.278588 | orchestrator | Thursday 16 April 2026 08:45:08 +0000 (0:00:02.241) 0:59:15.067 ********
2026-04-16 08:45:58.278596 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-16 08:45:58.278604 | orchestrator |
2026-04-16 08:45:58.278611 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-16 08:45:58.278618 | orchestrator | Thursday 16 April 2026 08:45:11 +0000 (0:00:03.119) 0:59:18.187 ********
2026-04-16 08:45:58.278625 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-04-16 08:45:58.278631 | orchestrator |
2026-04-16 08:45:58.278638 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:45:58.278645 | orchestrator | Thursday 16 April 2026 08:45:13 +0000 (0:00:02.286) 0:59:20.473 ********
2026-04-16 08:45:58.278652 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
2026-04-16 08:45:58.278659 | orchestrator |
2026-04-16 08:45:58.278665 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:45:58.278683 | orchestrator | Thursday 16 April 2026 08:45:14 +0000 (0:00:01.135) 0:59:21.609 ********
2026-04-16 08:45:58.278690 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
2026-04-16 08:45:58.278697 | orchestrator |
2026-04-16 08:45:58.278704 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:45:58.278710 | orchestrator | Thursday 16 April 2026 08:45:15 +0000 (0:00:01.130) 0:59:22.739 ********
2026-04-16 08:45:58.278717 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.278724 | orchestrator |
2026-04-16 08:45:58.278730 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:45:58.278737 | orchestrator | Thursday 16 April 2026 08:45:17 +0000 (0:00:01.126) 0:59:23.866 ********
2026-04-16 08:45:58.278744 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.278752 | orchestrator |
2026-04-16 08:45:58.278758 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:45:58.278765 | orchestrator | Thursday 16 April 2026 08:45:18 +0000 (0:00:01.502) 0:59:25.369 ********
2026-04-16 08:45:58.278772 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.278779 | orchestrator |
2026-04-16 08:45:58.278786 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:45:58.278812 | orchestrator | Thursday 16 April 2026 08:45:20 +0000 (0:00:01.498) 0:59:26.867 ********
2026-04-16 08:45:58.278819 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.278826 | orchestrator |
2026-04-16 08:45:58.278843 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:45:58.278850 | orchestrator | Thursday 16 April 2026 08:45:21 +0000 (0:00:01.524) 0:59:28.392 ********
2026-04-16 08:45:58.278857 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.278864 | orchestrator |
2026-04-16 08:45:58.278871 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:45:58.278877 | orchestrator | Thursday 16 April 2026 08:45:22 +0000 (0:00:01.116) 0:59:29.509 ********
2026-04-16 08:45:58.278884 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.278891 | orchestrator |
2026-04-16 08:45:58.278897 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:45:58.278904 | orchestrator | Thursday 16 April 2026 08:45:23 +0000 (0:00:01.111) 0:59:30.621 ********
2026-04-16 08:45:58.278911 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.278917 | orchestrator |
2026-04-16 08:45:58.278924 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:45:58.278931 | orchestrator | Thursday 16 April 2026 08:45:24 +0000 (0:00:01.132) 0:59:31.754 ********
2026-04-16 08:45:58.278937 |
orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.278944 | orchestrator |
2026-04-16 08:45:58.278951 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:45:58.278957 | orchestrator | Thursday 16 April 2026 08:45:26 +0000 (0:00:01.512) 0:59:33.266 ********
2026-04-16 08:45:58.278964 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.278971 | orchestrator |
2026-04-16 08:45:58.278977 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:45:58.278984 | orchestrator | Thursday 16 April 2026 08:45:28 +0000 (0:00:01.538) 0:59:34.804 ********
2026-04-16 08:45:58.278990 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279047 | orchestrator |
2026-04-16 08:45:58.279058 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:45:58.279066 | orchestrator | Thursday 16 April 2026 08:45:29 +0000 (0:00:01.098) 0:59:35.903 ********
2026-04-16 08:45:58.279074 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279083 | orchestrator |
2026-04-16 08:45:58.279090 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:45:58.279098 | orchestrator | Thursday 16 April 2026 08:45:30 +0000 (0:00:01.106) 0:59:37.010 ********
2026-04-16 08:45:58.279105 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.279113 | orchestrator |
2026-04-16 08:45:58.279121 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:45:58.279128 | orchestrator | Thursday 16 April 2026 08:45:31 +0000 (0:00:01.126) 0:59:38.136 ********
2026-04-16 08:45:58.279136 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.279144 | orchestrator |
2026-04-16 08:45:58.279152 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:45:58.279159 | orchestrator | Thursday 16 April 2026 08:45:32 +0000 (0:00:01.128) 0:59:39.264 ********
2026-04-16 08:45:58.279169 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.279180 | orchestrator |
2026-04-16 08:45:58.279213 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:45:58.279228 | orchestrator | Thursday 16 April 2026 08:45:33 +0000 (0:00:01.124) 0:59:40.389 ********
2026-04-16 08:45:58.279239 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279250 | orchestrator |
2026-04-16 08:45:58.279261 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:45:58.279272 | orchestrator | Thursday 16 April 2026 08:45:34 +0000 (0:00:01.131) 0:59:41.521 ********
2026-04-16 08:45:58.279283 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279294 | orchestrator |
2026-04-16 08:45:58.279305 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:45:58.279326 | orchestrator | Thursday 16 April 2026 08:45:35 +0000 (0:00:01.132) 0:59:42.653 ********
2026-04-16 08:45:58.279336 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279347 | orchestrator |
2026-04-16 08:45:58.279358 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:45:58.279371 | orchestrator | Thursday 16 April 2026 08:45:36 +0000 (0:00:01.101) 0:59:43.754 ********
2026-04-16 08:45:58.279382 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.279393 | orchestrator |
2026-04-16 08:45:58.279403 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:45:58.279413 | orchestrator | Thursday 16 April 2026 08:45:38 +0000 (0:00:01.153) 0:59:44.907 ********
2026-04-16 08:45:58.279423 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:45:58.279434 | orchestrator |
2026-04-16 08:45:58.279445 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-04-16 08:45:58.279456 | orchestrator | Thursday 16 April 2026 08:45:39 +0000 (0:00:01.105) 0:59:46.026 ********
2026-04-16 08:45:58.279467 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279478 | orchestrator |
2026-04-16 08:45:58.279489 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-04-16 08:45:58.279500 | orchestrator | Thursday 16 April 2026 08:45:40 +0000 (0:00:01.105) 0:59:47.131 ********
2026-04-16 08:45:58.279512 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279524 | orchestrator |
2026-04-16 08:45:58.279536 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-04-16 08:45:58.279547 | orchestrator | Thursday 16 April 2026 08:45:41 +0000 (0:00:01.099) 0:59:48.230 ********
2026-04-16 08:45:58.279559 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279571 | orchestrator |
2026-04-16 08:45:58.279583 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-04-16 08:45:58.279595 | orchestrator | Thursday 16 April 2026 08:45:42 +0000 (0:00:01.167) 0:59:49.398 ********
2026-04-16 08:45:58.279605 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279612 | orchestrator |
2026-04-16 08:45:58.279619 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-04-16 08:45:58.279625 | orchestrator | Thursday 16 April 2026 08:45:43 +0000 (0:00:01.120) 0:59:50.519 ********
2026-04-16 08:45:58.279632 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:45:58.279639 | orchestrator |
2026-04-16 08:45:58.279645 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-04-16 08:45:58.279659 | orchestrator | Thursday 16 April 2026 08:45:44 +0000 (0:00:01.151) 0:59:51.670 ********
2026-04-16 08:45:58.279666 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279672 | orchestrator | 2026-04-16 08:45:58.279679 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:45:58.279685 | orchestrator | Thursday 16 April 2026 08:45:46 +0000 (0:00:01.167) 0:59:52.838 ******** 2026-04-16 08:45:58.279692 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279698 | orchestrator | 2026-04-16 08:45:58.279705 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:45:58.279712 | orchestrator | Thursday 16 April 2026 08:45:47 +0000 (0:00:01.102) 0:59:53.940 ******** 2026-04-16 08:45:58.279719 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279726 | orchestrator | 2026-04-16 08:45:58.279732 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:45:58.279739 | orchestrator | Thursday 16 April 2026 08:45:48 +0000 (0:00:01.089) 0:59:55.030 ******** 2026-04-16 08:45:58.279745 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279752 | orchestrator | 2026-04-16 08:45:58.279758 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:45:58.279765 | orchestrator | Thursday 16 April 2026 08:45:49 +0000 (0:00:01.120) 0:59:56.150 ******** 2026-04-16 08:45:58.279772 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279782 | orchestrator | 2026-04-16 08:45:58.279796 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:45:58.279821 | orchestrator | Thursday 16 April 2026 08:45:50 +0000 (0:00:01.127) 0:59:57.278 ******** 2026-04-16 08:45:58.279832 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279843 | orchestrator | 2026-04-16 08:45:58.279854 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-16 08:45:58.279864 | orchestrator | Thursday 16 April 2026 08:45:51 +0000 (0:00:01.154) 0:59:58.433 ******** 2026-04-16 08:45:58.279876 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:45:58.279887 | orchestrator | 2026-04-16 08:45:58.279898 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:45:58.279909 | orchestrator | Thursday 16 April 2026 08:45:52 +0000 (0:00:01.105) 0:59:59.538 ******** 2026-04-16 08:45:58.279920 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:45:58.279928 | orchestrator | 2026-04-16 08:45:58.279934 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:45:58.279941 | orchestrator | Thursday 16 April 2026 08:45:54 +0000 (0:00:02.084) 1:00:01.622 ******** 2026-04-16 08:45:58.279948 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:45:58.279954 | orchestrator | 2026-04-16 08:45:58.279961 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:45:58.279968 | orchestrator | Thursday 16 April 2026 08:45:57 +0000 (0:00:02.247) 1:00:03.870 ******** 2026-04-16 08:45:58.279974 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3 2026-04-16 08:45:58.279981 | orchestrator | 2026-04-16 08:45:58.279988 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:45:58.280021 | orchestrator | Thursday 16 April 2026 08:45:58 +0000 (0:00:01.153) 1:00:05.023 ******** 2026-04-16 08:46:44.190349 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.190480 | orchestrator | 2026-04-16 08:46:44.190498 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:46:44.190512 | orchestrator | Thursday 16 April 2026 08:45:59 +0000 (0:00:01.120) 1:00:06.143 ******** 
2026-04-16 08:46:44.190524 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.190535 | orchestrator | 2026-04-16 08:46:44.190547 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:46:44.190558 | orchestrator | Thursday 16 April 2026 08:46:00 +0000 (0:00:01.127) 1:00:07.271 ******** 2026-04-16 08:46:44.190569 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:46:44.190581 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:46:44.190592 | orchestrator | 2026-04-16 08:46:44.190603 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:46:44.190618 | orchestrator | Thursday 16 April 2026 08:46:02 +0000 (0:00:01.806) 1:00:09.077 ******** 2026-04-16 08:46:44.190645 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:46:44.190668 | orchestrator | 2026-04-16 08:46:44.190685 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:46:44.190702 | orchestrator | Thursday 16 April 2026 08:46:03 +0000 (0:00:01.457) 1:00:10.535 ******** 2026-04-16 08:46:44.190717 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.190735 | orchestrator | 2026-04-16 08:46:44.190753 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:46:44.190773 | orchestrator | Thursday 16 April 2026 08:46:04 +0000 (0:00:01.132) 1:00:11.668 ******** 2026-04-16 08:46:44.190791 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.190809 | orchestrator | 2026-04-16 08:46:44.190827 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:46:44.190846 | orchestrator | Thursday 16 April 2026 08:46:06 +0000 (0:00:01.138) 1:00:12.807 ******** 2026-04-16 08:46:44.190864 | orchestrator | 
skipping: [testbed-node-3] 2026-04-16 08:46:44.190881 | orchestrator | 2026-04-16 08:46:44.190900 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:46:44.190920 | orchestrator | Thursday 16 April 2026 08:46:07 +0000 (0:00:01.133) 1:00:13.940 ******** 2026-04-16 08:46:44.190975 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3 2026-04-16 08:46:44.190989 | orchestrator | 2026-04-16 08:46:44.191000 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:46:44.191011 | orchestrator | Thursday 16 April 2026 08:46:08 +0000 (0:00:01.107) 1:00:15.048 ******** 2026-04-16 08:46:44.191057 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:46:44.191069 | orchestrator | 2026-04-16 08:46:44.191079 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:46:44.191090 | orchestrator | Thursday 16 April 2026 08:46:09 +0000 (0:00:01.644) 1:00:16.693 ******** 2026-04-16 08:46:44.191102 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:46:44.191113 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:46:44.191124 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:46:44.191135 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191146 | orchestrator | 2026-04-16 08:46:44.191157 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:46:44.191168 | orchestrator | Thursday 16 April 2026 08:46:11 +0000 (0:00:01.123) 1:00:17.816 ******** 2026-04-16 08:46:44.191179 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191189 | orchestrator | 2026-04-16 08:46:44.191200 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-16 08:46:44.191210 | orchestrator | Thursday 16 April 2026 08:46:12 +0000 (0:00:01.112) 1:00:18.928 ******** 2026-04-16 08:46:44.191221 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191232 | orchestrator | 2026-04-16 08:46:44.191242 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:46:44.191253 | orchestrator | Thursday 16 April 2026 08:46:13 +0000 (0:00:01.228) 1:00:20.157 ******** 2026-04-16 08:46:44.191263 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191274 | orchestrator | 2026-04-16 08:46:44.191284 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:46:44.191295 | orchestrator | Thursday 16 April 2026 08:46:14 +0000 (0:00:01.185) 1:00:21.343 ******** 2026-04-16 08:46:44.191306 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191316 | orchestrator | 2026-04-16 08:46:44.191327 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:46:44.191337 | orchestrator | Thursday 16 April 2026 08:46:15 +0000 (0:00:01.102) 1:00:22.446 ******** 2026-04-16 08:46:44.191460 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191498 | orchestrator | 2026-04-16 08:46:44.191522 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:46:44.191541 | orchestrator | Thursday 16 April 2026 08:46:16 +0000 (0:00:01.109) 1:00:23.556 ******** 2026-04-16 08:46:44.191559 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:46:44.191577 | orchestrator | 2026-04-16 08:46:44.191593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:46:44.191609 | orchestrator | Thursday 16 April 2026 08:46:19 +0000 (0:00:02.480) 1:00:26.036 ******** 2026-04-16 08:46:44.191628 | orchestrator | ok: 
[testbed-node-3] 2026-04-16 08:46:44.191647 | orchestrator | 2026-04-16 08:46:44.191666 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:46:44.191686 | orchestrator | Thursday 16 April 2026 08:46:20 +0000 (0:00:01.118) 1:00:27.155 ******** 2026-04-16 08:46:44.191707 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3 2026-04-16 08:46:44.191727 | orchestrator | 2026-04-16 08:46:44.191742 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:46:44.191776 | orchestrator | Thursday 16 April 2026 08:46:21 +0000 (0:00:01.188) 1:00:28.344 ******** 2026-04-16 08:46:44.191787 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191802 | orchestrator | 2026-04-16 08:46:44.191845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:46:44.191868 | orchestrator | Thursday 16 April 2026 08:46:22 +0000 (0:00:01.112) 1:00:29.456 ******** 2026-04-16 08:46:44.191887 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191906 | orchestrator | 2026-04-16 08:46:44.191925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:46:44.191943 | orchestrator | Thursday 16 April 2026 08:46:23 +0000 (0:00:01.132) 1:00:30.589 ******** 2026-04-16 08:46:44.191959 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.191970 | orchestrator | 2026-04-16 08:46:44.191981 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:46:44.191992 | orchestrator | Thursday 16 April 2026 08:46:24 +0000 (0:00:01.160) 1:00:31.749 ******** 2026-04-16 08:46:44.192002 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.192013 | orchestrator | 2026-04-16 08:46:44.192086 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-16 08:46:44.192097 | orchestrator | Thursday 16 April 2026 08:46:26 +0000 (0:00:01.143) 1:00:32.892 ******** 2026-04-16 08:46:44.192108 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.192119 | orchestrator | 2026-04-16 08:46:44.192130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:46:44.192141 | orchestrator | Thursday 16 April 2026 08:46:27 +0000 (0:00:01.161) 1:00:34.054 ******** 2026-04-16 08:46:44.192152 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.192163 | orchestrator | 2026-04-16 08:46:44.192174 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:46:44.192184 | orchestrator | Thursday 16 April 2026 08:46:28 +0000 (0:00:01.154) 1:00:35.209 ******** 2026-04-16 08:46:44.192195 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.192206 | orchestrator | 2026-04-16 08:46:44.192217 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:46:44.192227 | orchestrator | Thursday 16 April 2026 08:46:29 +0000 (0:00:01.104) 1:00:36.314 ******** 2026-04-16 08:46:44.192238 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:46:44.192249 | orchestrator | 2026-04-16 08:46:44.192259 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:46:44.192270 | orchestrator | Thursday 16 April 2026 08:46:30 +0000 (0:00:01.153) 1:00:37.467 ******** 2026-04-16 08:46:44.192281 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:46:44.192292 | orchestrator | 2026-04-16 08:46:44.192302 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:46:44.192313 | orchestrator | Thursday 16 April 2026 08:46:31 +0000 (0:00:01.133) 1:00:38.600 ******** 2026-04-16 08:46:44.192324 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3 2026-04-16 08:46:44.192335 | orchestrator | 2026-04-16 08:46:44.192346 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:46:44.192365 | orchestrator | Thursday 16 April 2026 08:46:32 +0000 (0:00:01.101) 1:00:39.702 ******** 2026-04-16 08:46:44.192376 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-04-16 08:46:44.192387 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-04-16 08:46:44.192398 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-04-16 08:46:44.192409 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-04-16 08:46:44.192420 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-04-16 08:46:44.192431 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-04-16 08:46:44.192441 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-04-16 08:46:44.192452 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:46:44.192463 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:46:44.192474 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:46:44.192485 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:46:44.192508 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:46:44.192519 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:46:44.192530 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:46:44.192542 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-04-16 08:46:44.192553 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-04-16 08:46:44.192563 | orchestrator | 2026-04-16 08:46:44.192574 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:46:44.192585 | orchestrator | Thursday 16 April 2026 08:46:39 +0000 (0:00:06.588) 1:00:46.291 ******** 2026-04-16 08:46:44.192596 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3 2026-04-16 08:46:44.192607 | orchestrator | 2026-04-16 08:46:44.192618 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:46:44.192629 | orchestrator | Thursday 16 April 2026 08:46:40 +0000 (0:00:01.178) 1:00:47.469 ******** 2026-04-16 08:46:44.192639 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:46:44.192652 | orchestrator | 2026-04-16 08:46:44.192662 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:46:44.192673 | orchestrator | Thursday 16 April 2026 08:46:42 +0000 (0:00:01.449) 1:00:48.918 ******** 2026-04-16 08:46:44.192684 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:46:44.192695 | orchestrator | 2026-04-16 08:46:44.192706 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:46:44.192727 | orchestrator | Thursday 16 April 2026 08:46:44 +0000 (0:00:02.017) 1:00:50.935 ******** 2026-04-16 08:47:33.793178 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793296 | orchestrator | 2026-04-16 08:47:33.793314 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:47:33.793327 | orchestrator | Thursday 16 April 2026 08:46:45 +0000 (0:00:01.128) 1:00:52.064 ******** 2026-04-16 08:47:33.793338 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793349 | 
orchestrator | 2026-04-16 08:47:33.793361 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:47:33.793372 | orchestrator | Thursday 16 April 2026 08:46:46 +0000 (0:00:01.093) 1:00:53.158 ******** 2026-04-16 08:47:33.793382 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793393 | orchestrator | 2026-04-16 08:47:33.793405 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:47:33.793420 | orchestrator | Thursday 16 April 2026 08:46:47 +0000 (0:00:01.096) 1:00:54.255 ******** 2026-04-16 08:47:33.793439 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793457 | orchestrator | 2026-04-16 08:47:33.793475 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:47:33.793495 | orchestrator | Thursday 16 April 2026 08:46:48 +0000 (0:00:01.118) 1:00:55.374 ******** 2026-04-16 08:47:33.793514 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793533 | orchestrator | 2026-04-16 08:47:33.793552 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:47:33.793573 | orchestrator | Thursday 16 April 2026 08:46:49 +0000 (0:00:01.102) 1:00:56.476 ******** 2026-04-16 08:47:33.793592 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793611 | orchestrator | 2026-04-16 08:47:33.793629 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:47:33.793650 | orchestrator | Thursday 16 April 2026 08:46:50 +0000 (0:00:01.140) 1:00:57.617 ******** 2026-04-16 08:47:33.793672 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793695 | orchestrator | 2026-04-16 08:47:33.793716 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-16 08:47:33.793767 | orchestrator | Thursday 16 April 2026 08:46:51 +0000 (0:00:01.103) 1:00:58.721 ******** 2026-04-16 08:47:33.793790 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793812 | orchestrator | 2026-04-16 08:47:33.793833 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:47:33.793854 | orchestrator | Thursday 16 April 2026 08:46:53 +0000 (0:00:01.125) 1:00:59.846 ******** 2026-04-16 08:47:33.793873 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793893 | orchestrator | 2026-04-16 08:47:33.793912 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:47:33.793931 | orchestrator | Thursday 16 April 2026 08:46:54 +0000 (0:00:01.116) 1:01:00.963 ******** 2026-04-16 08:47:33.793949 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.793969 | orchestrator | 2026-04-16 08:47:33.794006 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:47:33.794155 | orchestrator | Thursday 16 April 2026 08:46:55 +0000 (0:00:01.163) 1:01:02.126 ******** 2026-04-16 08:47:33.794178 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794196 | orchestrator | 2026-04-16 08:47:33.794216 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:47:33.794236 | orchestrator | Thursday 16 April 2026 08:46:56 +0000 (0:00:01.119) 1:01:03.245 ******** 2026-04-16 08:47:33.794256 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:47:33.794276 | orchestrator | 2026-04-16 08:47:33.794296 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:47:33.794316 | orchestrator | Thursday 16 April 2026 08:47:00 +0000 (0:00:04.505) 1:01:07.751 ******** 2026-04-16 08:47:33.794336 | orchestrator | 
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:47:33.794356 | orchestrator | 2026-04-16 08:47:33.794377 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:47:33.794397 | orchestrator | Thursday 16 April 2026 08:47:02 +0000 (0:00:01.154) 1:01:08.906 ******** 2026-04-16 08:47:33.794420 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-04-16 08:47:33.794444 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-04-16 08:47:33.794466 | orchestrator | 2026-04-16 08:47:33.794486 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:47:33.794506 | orchestrator | Thursday 16 April 2026 08:47:07 +0000 (0:00:04.869) 1:01:13.776 ******** 2026-04-16 08:47:33.794526 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794545 | orchestrator | 2026-04-16 08:47:33.794564 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:47:33.794585 | orchestrator | Thursday 16 April 2026 08:47:08 +0000 (0:00:01.140) 1:01:14.916 ******** 2026-04-16 08:47:33.794606 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794626 | orchestrator | 2026-04-16 08:47:33.794646 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:47:33.794695 | orchestrator | Thursday 16 April 2026 08:47:09 +0000 (0:00:01.111) 1:01:16.027 ******** 2026-04-16 08:47:33.794717 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794737 | orchestrator | 2026-04-16 08:47:33.794756 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:47:33.794792 | orchestrator | Thursday 16 April 2026 08:47:10 +0000 (0:00:01.148) 1:01:17.176 ******** 2026-04-16 08:47:33.794810 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794829 | orchestrator | 2026-04-16 08:47:33.794845 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:47:33.794863 | orchestrator | Thursday 16 April 2026 08:47:11 +0000 (0:00:01.149) 1:01:18.325 ******** 2026-04-16 08:47:33.794881 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.794899 | orchestrator | 2026-04-16 08:47:33.794916 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:47:33.794934 | orchestrator | Thursday 16 April 2026 08:47:12 +0000 (0:00:01.148) 1:01:19.474 ******** 2026-04-16 08:47:33.794950 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:47:33.794968 | orchestrator | 2026-04-16 08:47:33.794985 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:47:33.795000 | orchestrator | Thursday 16 April 2026 08:47:13 +0000 (0:00:01.242) 1:01:20.717 ******** 2026-04-16 08:47:33.795017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:47:33.795034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:47:33.795084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:47:33.795100 | orchestrator | skipping: 
[testbed-node-3] 2026-04-16 08:47:33.795116 | orchestrator | 2026-04-16 08:47:33.795133 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:47:33.795150 | orchestrator | Thursday 16 April 2026 08:47:15 +0000 (0:00:01.820) 1:01:22.537 ******** 2026-04-16 08:47:33.795167 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:47:33.795183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:47:33.795199 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:47:33.795216 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.795233 | orchestrator | 2026-04-16 08:47:33.795249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:47:33.795266 | orchestrator | Thursday 16 April 2026 08:47:17 +0000 (0:00:01.753) 1:01:24.291 ******** 2026-04-16 08:47:33.795282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-04-16 08:47:33.795299 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-04-16 08:47:33.795316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-04-16 08:47:33.795332 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.795349 | orchestrator | 2026-04-16 08:47:33.795365 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:47:33.795393 | orchestrator | Thursday 16 April 2026 08:47:19 +0000 (0:00:01.817) 1:01:26.109 ******** 2026-04-16 08:47:33.795410 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:47:33.795427 | orchestrator | 2026-04-16 08:47:33.795443 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:47:33.795462 | orchestrator | Thursday 16 April 2026 08:47:20 +0000 (0:00:01.170) 1:01:27.280 ******** 2026-04-16 08:47:33.795479 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-04-16 08:47:33.795497 | orchestrator | 2026-04-16 08:47:33.795515 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:47:33.795532 | orchestrator | Thursday 16 April 2026 08:47:21 +0000 (0:00:01.314) 1:01:28.594 ******** 2026-04-16 08:47:33.795549 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:47:33.795566 | orchestrator | 2026-04-16 08:47:33.795584 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-16 08:47:33.795603 | orchestrator | Thursday 16 April 2026 08:47:23 +0000 (0:00:01.717) 1:01:30.312 ******** 2026-04-16 08:47:33.795622 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-04-16 08:47:33.795642 | orchestrator | 2026-04-16 08:47:33.795662 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 08:47:33.795681 | orchestrator | Thursday 16 April 2026 08:47:25 +0000 (0:00:01.470) 1:01:31.782 ******** 2026-04-16 08:47:33.795715 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:47:33.795734 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:47:33.795753 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:47:33.795771 | orchestrator | 2026-04-16 08:47:33.795789 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:47:33.795807 | orchestrator | Thursday 16 April 2026 08:47:28 +0000 (0:00:03.291) 1:01:35.074 ******** 2026-04-16 08:47:33.795825 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 08:47:33.795843 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-04-16 08:47:33.795860 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:47:33.795871 | orchestrator | 2026-04-16 08:47:33.795882 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-16 08:47:33.795893 | orchestrator | Thursday 16 April 2026 08:47:30 +0000 (0:00:02.018) 1:01:37.093 ******** 2026-04-16 08:47:33.795903 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:47:33.795914 | orchestrator | 2026-04-16 08:47:33.795924 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-16 08:47:33.795935 | orchestrator | Thursday 16 April 2026 08:47:31 +0000 (0:00:01.124) 1:01:38.217 ******** 2026-04-16 08:47:33.795946 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-04-16 08:47:33.795957 | orchestrator | 2026-04-16 08:47:33.795968 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-16 08:47:33.795978 | orchestrator | Thursday 16 April 2026 08:47:32 +0000 (0:00:01.457) 1:01:39.675 ******** 2026-04-16 08:47:33.796005 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:48:48.905040 | orchestrator | 2026-04-16 08:48:48.905175 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-16 08:48:48.905187 | orchestrator | Thursday 16 April 2026 08:47:34 +0000 (0:00:01.946) 1:01:41.622 ******** 2026-04-16 08:48:48.905195 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:48:48.905205 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-16 08:48:48.905213 | orchestrator | 2026-04-16 08:48:48.905220 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 08:48:48.905228 | orchestrator | Thursday 16 April 2026 08:47:40 +0000 (0:00:05.270) 1:01:46.892 ******** 
2026-04-16 08:48:48.905235 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:48:48.905244 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:48:48.905251 | orchestrator | 2026-04-16 08:48:48.905258 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:48:48.905265 | orchestrator | Thursday 16 April 2026 08:47:43 +0000 (0:00:03.166) 1:01:50.058 ******** 2026-04-16 08:48:48.905273 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 08:48:48.905280 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:48:48.905289 | orchestrator | 2026-04-16 08:48:48.905296 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-16 08:48:48.905303 | orchestrator | Thursday 16 April 2026 08:47:45 +0000 (0:00:02.059) 1:01:52.118 ******** 2026-04-16 08:48:48.905311 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-04-16 08:48:48.905318 | orchestrator | 2026-04-16 08:48:48.905325 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-16 08:48:48.905332 | orchestrator | Thursday 16 April 2026 08:47:46 +0000 (0:00:01.457) 1:01:53.575 ******** 2026-04-16 08:48:48.905339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905408 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:48:48.905415 | orchestrator | 2026-04-16 08:48:48.905423 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-16 08:48:48.905430 | orchestrator | Thursday 16 April 2026 08:47:48 +0000 (0:00:01.575) 1:01:55.151 ******** 2026-04-16 08:48:48.905437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:48:48.905473 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:48:48.905480 | orchestrator | 2026-04-16 08:48:48.905487 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-16 08:48:48.905495 | orchestrator | Thursday 16 April 2026 08:47:49 +0000 (0:00:01.556) 1:01:56.708 ******** 2026-04-16 08:48:48.905502 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:48:48.905512 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:48:48.905519 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:48:48.905527 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:48:48.905535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:48:48.905543 | orchestrator | 2026-04-16 08:48:48.905550 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-16 08:48:48.905571 | orchestrator | Thursday 16 April 2026 08:48:22 +0000 (0:00:32.128) 1:02:28.837 ******** 2026-04-16 08:48:48.905579 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:48:48.905586 | orchestrator | 2026-04-16 08:48:48.905593 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-16 08:48:48.905601 | orchestrator | Thursday 16 April 2026 08:48:23 +0000 (0:00:01.146) 1:02:29.983 ******** 2026-04-16 08:48:48.905609 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:48:48.905618 | orchestrator | 2026-04-16 08:48:48.905626 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-16 08:48:48.905634 | orchestrator | Thursday 16 April 2026 08:48:24 +0000 (0:00:01.106) 1:02:31.090 ******** 2026-04-16 08:48:48.905642 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-04-16 08:48:48.905656 | orchestrator | 2026-04-16 08:48:48.905665 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-16 08:48:48.905673 | orchestrator | Thursday 16 April 2026 08:48:25 +0000 (0:00:01.482) 1:02:32.572 ******** 2026-04-16 08:48:48.905681 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-04-16 08:48:48.905689 | orchestrator | 2026-04-16 08:48:48.905698 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-16 08:48:48.905706 | orchestrator | Thursday 16 April 2026 08:48:27 +0000 (0:00:01.517) 1:02:34.090 ******** 2026-04-16 08:48:48.905714 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:48:48.905722 | orchestrator | 2026-04-16 08:48:48.905730 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-16 08:48:48.905738 | orchestrator | Thursday 16 April 2026 08:48:29 +0000 (0:00:02.043) 1:02:36.133 ******** 2026-04-16 08:48:48.905747 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:48:48.905755 | orchestrator | 2026-04-16 08:48:48.905763 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-16 08:48:48.905771 | orchestrator | Thursday 16 April 2026 08:48:31 +0000 (0:00:01.979) 1:02:38.113 ******** 2026-04-16 08:48:48.905779 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:48:48.905787 | orchestrator | 2026-04-16 08:48:48.905795 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-16 08:48:48.905804 | orchestrator | Thursday 16 April 2026 08:48:33 +0000 (0:00:02.201) 1:02:40.315 ******** 2026-04-16 08:48:48.905812 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-04-16 08:48:48.905819 | orchestrator | 2026-04-16 08:48:48.905826 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-16 08:48:48.905834 | 
orchestrator | 2026-04-16 08:48:48.905841 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:48:48.905848 | orchestrator | Thursday 16 April 2026 08:48:36 +0000 (0:00:02.733) 1:02:43.048 ******** 2026-04-16 08:48:48.905859 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-04-16 08:48:48.905866 | orchestrator | 2026-04-16 08:48:48.905873 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:48:48.905881 | orchestrator | Thursday 16 April 2026 08:48:37 +0000 (0:00:01.074) 1:02:44.123 ******** 2026-04-16 08:48:48.905888 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.905895 | orchestrator | 2026-04-16 08:48:48.905902 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:48:48.905909 | orchestrator | Thursday 16 April 2026 08:48:38 +0000 (0:00:01.491) 1:02:45.614 ******** 2026-04-16 08:48:48.905916 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.905923 | orchestrator | 2026-04-16 08:48:48.905931 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:48:48.905938 | orchestrator | Thursday 16 April 2026 08:48:39 +0000 (0:00:01.100) 1:02:46.715 ******** 2026-04-16 08:48:48.905945 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.905952 | orchestrator | 2026-04-16 08:48:48.905959 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:48:48.905966 | orchestrator | Thursday 16 April 2026 08:48:41 +0000 (0:00:01.504) 1:02:48.219 ******** 2026-04-16 08:48:48.905974 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.905981 | orchestrator | 2026-04-16 08:48:48.905988 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:48:48.905995 | orchestrator | Thursday 
16 April 2026 08:48:42 +0000 (0:00:01.166) 1:02:49.386 ******** 2026-04-16 08:48:48.906002 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.906009 | orchestrator | 2026-04-16 08:48:48.906060 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:48:48.906086 | orchestrator | Thursday 16 April 2026 08:48:43 +0000 (0:00:01.200) 1:02:50.586 ******** 2026-04-16 08:48:48.906094 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.906106 | orchestrator | 2026-04-16 08:48:48.906114 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:48:48.906121 | orchestrator | Thursday 16 April 2026 08:48:44 +0000 (0:00:01.157) 1:02:51.744 ******** 2026-04-16 08:48:48.906128 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:48:48.906135 | orchestrator | 2026-04-16 08:48:48.906143 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:48:48.906150 | orchestrator | Thursday 16 April 2026 08:48:46 +0000 (0:00:01.156) 1:02:52.901 ******** 2026-04-16 08:48:48.906157 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:48:48.906165 | orchestrator | 2026-04-16 08:48:48.906172 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:48:48.906179 | orchestrator | Thursday 16 April 2026 08:48:47 +0000 (0:00:01.093) 1:02:53.995 ******** 2026-04-16 08:48:48.906187 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:48:48.906194 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:48:48.906201 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:48:48.906208 | orchestrator | 2026-04-16 08:48:48.906216 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-16 08:48:48.906228 | orchestrator | Thursday 16 April 2026 08:48:48 +0000 (0:00:01.656) 1:02:55.651 ******** 2026-04-16 08:49:13.711824 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.711963 | orchestrator | 2026-04-16 08:49:13.711993 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:49:13.712016 | orchestrator | Thursday 16 April 2026 08:48:50 +0000 (0:00:01.209) 1:02:56.861 ******** 2026-04-16 08:49:13.712037 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:49:13.712058 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:49:13.712131 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:49:13.712154 | orchestrator | 2026-04-16 08:49:13.712174 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:49:13.712193 | orchestrator | Thursday 16 April 2026 08:48:53 +0000 (0:00:02.945) 1:02:59.806 ******** 2026-04-16 08:49:13.712210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-16 08:49:13.712222 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-16 08:49:13.712233 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-16 08:49:13.712244 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.712255 | orchestrator | 2026-04-16 08:49:13.712266 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:49:13.712277 | orchestrator | Thursday 16 April 2026 08:48:54 +0000 (0:00:01.373) 1:03:01.180 ******** 2026-04-16 08:49:13.712290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712316 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712327 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.712341 | orchestrator | 2026-04-16 08:49:13.712353 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:49:13.712382 | orchestrator | Thursday 16 April 2026 08:48:56 +0000 (0:00:01.939) 1:03:03.119 ******** 2026-04-16 08:49:13.712423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712440 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712455 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:13.712475 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.712504 | orchestrator | 2026-04-16 08:49:13.712525 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:49:13.712544 | orchestrator | Thursday 16 April 2026 08:48:57 +0000 (0:00:01.146) 1:03:04.265 ******** 2026-04-16 08:49:13.712592 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:48:50.628850', 'end': '2026-04-16 08:48:50.688647', 'delta': '0:00:00.059797', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:49:13.712617 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:48:51.249342', 'end': '2026-04-16 08:48:51.299277', 'delta': '0:00:00.049935', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:49:13.712640 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:48:51.820978', 'end': '2026-04-16 08:48:51.874129', 'delta': '0:00:00.053151', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:49:13.712676 | orchestrator | 2026-04-16 08:49:13.712692 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:49:13.712705 | orchestrator | Thursday 16 April 2026 08:48:58 +0000 (0:00:01.179) 1:03:05.444 ******** 2026-04-16 08:49:13.712717 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.712728 | orchestrator | 2026-04-16 08:49:13.712739 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:49:13.712756 | orchestrator | Thursday 16 April 2026 08:48:59 +0000 (0:00:01.239) 1:03:06.684 ******** 2026-04-16 08:49:13.712768 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.712779 | orchestrator | 2026-04-16 08:49:13.712789 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-16 08:49:13.712800 | orchestrator | Thursday 16 April 2026 08:49:01 +0000 (0:00:01.633) 1:03:08.317 ******** 2026-04-16 08:49:13.712811 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.712822 | orchestrator | 2026-04-16 08:49:13.712833 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:49:13.712843 | orchestrator | Thursday 16 April 2026 08:49:02 +0000 (0:00:01.162) 1:03:09.479 ******** 2026-04-16 08:49:13.712854 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:49:13.712865 | orchestrator | 2026-04-16 08:49:13.712876 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:49:13.712887 | orchestrator | Thursday 16 April 2026 08:49:04 +0000 (0:00:01.919) 1:03:11.398 ******** 2026-04-16 08:49:13.712898 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.712909 | orchestrator | 2026-04-16 08:49:13.712919 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:49:13.712930 | orchestrator | Thursday 16 April 2026 08:49:05 +0000 (0:00:01.120) 1:03:12.519 ******** 2026-04-16 08:49:13.712941 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.712952 | orchestrator | 2026-04-16 08:49:13.712962 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:49:13.712973 | orchestrator | Thursday 16 April 2026 08:49:06 +0000 (0:00:01.105) 1:03:13.624 ******** 2026-04-16 08:49:13.712990 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.713017 | orchestrator | 2026-04-16 08:49:13.713039 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:49:13.713056 | orchestrator | Thursday 16 April 2026 08:49:08 +0000 (0:00:01.197) 1:03:14.822 ******** 2026-04-16 08:49:13.713101 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 08:49:13.713120 | orchestrator | 2026-04-16 08:49:13.713139 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:49:13.713158 | orchestrator | Thursday 16 April 2026 08:49:09 +0000 (0:00:01.083) 1:03:15.906 ******** 2026-04-16 08:49:13.713211 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.713229 | orchestrator | 2026-04-16 08:49:13.713246 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:49:13.713257 | orchestrator | Thursday 16 April 2026 08:49:10 +0000 (0:00:01.128) 1:03:17.035 ******** 2026-04-16 08:49:13.713268 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.713279 | orchestrator | 2026-04-16 08:49:13.713290 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:49:13.713301 | orchestrator | Thursday 16 April 2026 08:49:11 +0000 (0:00:01.148) 1:03:18.184 ******** 2026-04-16 08:49:13.713311 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:13.713322 | orchestrator | 2026-04-16 08:49:13.713333 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:49:13.713345 | orchestrator | Thursday 16 April 2026 08:49:12 +0000 (0:00:01.111) 1:03:19.295 ******** 2026-04-16 08:49:13.713356 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:13.713366 | orchestrator | 2026-04-16 08:49:13.713378 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:49:13.713400 | orchestrator | Thursday 16 April 2026 08:49:13 +0000 (0:00:01.160) 1:03:20.455 ******** 2026-04-16 08:49:16.226832 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:16.226970 | orchestrator | 2026-04-16 08:49:16.226985 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:49:16.226994 
| orchestrator | Thursday 16 April 2026 08:49:14 +0000 (0:00:01.128) 1:03:21.584 ******** 2026-04-16 08:49:16.227003 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:16.227011 | orchestrator | 2026-04-16 08:49:16.227018 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:49:16.227026 | orchestrator | Thursday 16 April 2026 08:49:15 +0000 (0:00:01.155) 1:03:22.740 ******** 2026-04-16 08:49:16.227036 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}})  2026-04-16 08:49:16.227140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:49:16.227163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}})  2026-04-16 08:49:16.227177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:49:16.227260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227292 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}})  2026-04-16 08:49:16.227324 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}})  2026-04-16 08:49:16.227335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:16.227375 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-04-16 08:49:17.866179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:17.866287 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:49:17.866303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:49:17.866317 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:17.866329 | orchestrator | 2026-04-16 08:49:17.866340 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-04-16 08:49:17.866351 | orchestrator | Thursday 16 April 2026 08:49:17 +0000 (0:00:01.329) 1:03:24.070 ******** 2026-04-16 08:49:17.866386 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866399 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6', 'dm-uuid-LVM-P3f7yLRTpIEb5YiFvJru8S9wxr4ezjx74DXnD3IoPILszkTjBfjVMj0iUpgNvVbJ'], 'uuids': ['9905a9af-5b37-4391-814a-1d841c43042d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866411 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99', 'scsi-SQEMU_QEMU_HARDDISK_5b9c3369-0440-4506-af4c-01bb913afd99'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b9c3369', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866446 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-fZdCYi-BDU3-F9nH-eb2u-TA7J-O9Ud-bTDT7j', 'scsi-0QEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13', 'scsi-SQEMU_QEMU_HARDDISK_ad98f1c3-bcf7-4daa-8620-21ecec1aea13'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866460 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866490 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866503 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-50-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866520 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:17.866553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe', 'dm-uuid-CRYPT-LUKS2-b9f9d92dbf144b5c8478da6b09002f8e-XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213561 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7b8b78e2--2212--5c47--abe3--ec23a1e6354f-osd--block--7b8b78e2--2212--5c47--abe3--ec23a1e6354f', 'dm-uuid-LVM-3I8wgkGTzP7ya6M4XSVB3RD4g3AF12IoXuoOsqEMAyKATZGAMaeSanIe0YiHIZQe'], 'uuids': ['b9f9d92d-bf14-4b5c-8478-da6b09002f8e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'ad98f1c3', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['XuoOsq-EMAy-KATZ-GAMa-eSan-Ie0Y-iHIZQe']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213607 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-cwAFjK-30da-efSc-DHwe-LECR-Mt1o-5veISd', 'scsi-0QEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3', 'scsi-SQEMU_QEMU_HARDDISK_6e9659e4-3cc7-4909-ad5f-d807239f86c3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6e9659e4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--280a11fd--e83f--54f4--b253--754709c5cdf6-osd--block--280a11fd--e83f--54f4--b253--754709c5cdf6']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213625 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213676 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7032e080', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1', 'scsi-SQEMU_QEMU_HARDDISK_7032e080-debe-4ddb-9f2d-e4e5a5f8dba8-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213700 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213713 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213725 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ', 'dm-uuid-CRYPT-LUKS2-9905a9af5b374391814a1d841c43042d-4DXnD3-IoPI-Lszk-TjBf-jVMj-0iUp-gNvVbJ'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:49:23.213737 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:49:23.213751 | orchestrator | 2026-04-16 08:49:23.213763 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-04-16 08:49:23.213775 | orchestrator | Thursday 16 April 2026 08:49:19 +0000 (0:00:01.836) 1:03:25.906 ******** 2026-04-16 08:49:23.213787 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:23.213799 | orchestrator | 2026-04-16 08:49:23.213810 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-04-16 08:49:23.213821 | orchestrator | Thursday 16 April 2026 08:49:20 +0000 (0:00:01.478) 1:03:27.385 ******** 2026-04-16 08:49:23.213832 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:23.213843 | orchestrator | 2026-04-16 08:49:23.213854 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:49:23.213864 | orchestrator | Thursday 16 April 2026 08:49:21 +0000 (0:00:01.113) 1:03:28.498 ******** 2026-04-16 08:49:23.213875 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:49:23.213886 | orchestrator | 2026-04-16 08:49:23.213897 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:49:23.213919 | orchestrator | Thursday 16 April 2026 08:49:23 +0000 (0:00:01.466) 1:03:29.965 ******** 2026-04-16 08:50:04.368182 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368301 | orchestrator | 2026-04-16 08:50:04.368319 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-04-16 08:50:04.368333 | orchestrator | Thursday 16 April 2026 08:49:24 +0000 (0:00:01.143) 1:03:31.108 ******** 2026-04-16 08:50:04.368344 | orchestrator | skipping: [testbed-node-4] 2026-04-16 
08:50:04.368355 | orchestrator | 2026-04-16 08:50:04.368367 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-04-16 08:50:04.368402 | orchestrator | Thursday 16 April 2026 08:49:25 +0000 (0:00:01.237) 1:03:32.346 ******** 2026-04-16 08:50:04.368431 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368442 | orchestrator | 2026-04-16 08:50:04.368454 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-04-16 08:50:04.368464 | orchestrator | Thursday 16 April 2026 08:49:26 +0000 (0:00:01.134) 1:03:33.480 ******** 2026-04-16 08:50:04.368487 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-04-16 08:50:04.368499 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-04-16 08:50:04.368510 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-04-16 08:50:04.368521 | orchestrator | 2026-04-16 08:50:04.368531 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-04-16 08:50:04.368542 | orchestrator | Thursday 16 April 2026 08:49:28 +0000 (0:00:01.656) 1:03:35.137 ******** 2026-04-16 08:50:04.368553 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-04-16 08:50:04.368564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-04-16 08:50:04.368576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-04-16 08:50:04.368587 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368598 | orchestrator | 2026-04-16 08:50:04.368609 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-04-16 08:50:04.368620 | orchestrator | Thursday 16 April 2026 08:49:29 +0000 (0:00:01.134) 1:03:36.271 ******** 2026-04-16 08:50:04.368631 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-04-16 08:50:04.368642 | 
orchestrator | 2026-04-16 08:50:04.368654 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:50:04.368666 | orchestrator | Thursday 16 April 2026 08:49:30 +0000 (0:00:01.125) 1:03:37.397 ******** 2026-04-16 08:50:04.368677 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368690 | orchestrator | 2026-04-16 08:50:04.368703 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:50:04.368716 | orchestrator | Thursday 16 April 2026 08:49:31 +0000 (0:00:01.139) 1:03:38.536 ******** 2026-04-16 08:50:04.368728 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368740 | orchestrator | 2026-04-16 08:50:04.368753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:50:04.368766 | orchestrator | Thursday 16 April 2026 08:49:32 +0000 (0:00:01.144) 1:03:39.681 ******** 2026-04-16 08:50:04.368777 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368788 | orchestrator | 2026-04-16 08:50:04.368799 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:50:04.368810 | orchestrator | Thursday 16 April 2026 08:49:34 +0000 (0:00:01.155) 1:03:40.837 ******** 2026-04-16 08:50:04.368821 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:04.368832 | orchestrator | 2026-04-16 08:50:04.368843 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:50:04.368854 | orchestrator | Thursday 16 April 2026 08:49:35 +0000 (0:00:01.204) 1:03:42.041 ******** 2026-04-16 08:50:04.368864 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:50:04.368875 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:50:04.368886 | orchestrator | skipping: [testbed-node-4] 
=> (item=testbed-node-5)  2026-04-16 08:50:04.368897 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368908 | orchestrator | 2026-04-16 08:50:04.368919 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:50:04.368930 | orchestrator | Thursday 16 April 2026 08:49:36 +0000 (0:00:01.370) 1:03:43.412 ******** 2026-04-16 08:50:04.368940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:50:04.368951 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:50:04.368962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:50:04.368981 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.368992 | orchestrator | 2026-04-16 08:50:04.369003 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:50:04.369014 | orchestrator | Thursday 16 April 2026 08:49:38 +0000 (0:00:01.362) 1:03:44.774 ******** 2026-04-16 08:50:04.369025 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:50:04.369035 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:50:04.369046 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:50:04.369057 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.369068 | orchestrator | 2026-04-16 08:50:04.369079 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:50:04.369116 | orchestrator | Thursday 16 April 2026 08:49:39 +0000 (0:00:01.346) 1:03:46.120 ******** 2026-04-16 08:50:04.369129 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:04.369140 | orchestrator | 2026-04-16 08:50:04.369150 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:50:04.369161 | orchestrator | Thursday 16 April 2026 08:49:40 +0000 
(0:00:01.131) 1:03:47.252 ******** 2026-04-16 08:50:04.369172 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-04-16 08:50:04.369183 | orchestrator | 2026-04-16 08:50:04.369194 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-04-16 08:50:04.369205 | orchestrator | Thursday 16 April 2026 08:49:41 +0000 (0:00:01.315) 1:03:48.567 ******** 2026-04-16 08:50:04.369250 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:50:04.369262 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:50:04.369274 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:50:04.369285 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-04-16 08:50:04.369296 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-16 08:50:04.369306 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:50:04.369317 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:50:04.369328 | orchestrator | 2026-04-16 08:50:04.369339 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-04-16 08:50:04.369350 | orchestrator | Thursday 16 April 2026 08:49:43 +0000 (0:00:02.157) 1:03:50.725 ******** 2026-04-16 08:50:04.369361 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:50:04.369371 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:50:04.369382 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:50:04.369393 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-04-16 08:50:04.369404 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-04-16 08:50:04.369415 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-04-16 08:50:04.369425 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-04-16 08:50:04.369436 | orchestrator | 2026-04-16 08:50:04.369447 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-04-16 08:50:04.369458 | orchestrator | Thursday 16 April 2026 08:49:46 +0000 (0:00:02.294) 1:03:53.019 ******** 2026-04-16 08:50:04.369468 | orchestrator | changed: [testbed-node-4] 2026-04-16 08:50:04.369479 | orchestrator | 2026-04-16 08:50:04.369490 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-04-16 08:50:04.369501 | orchestrator | Thursday 16 April 2026 08:49:48 +0000 (0:00:02.037) 1:03:55.057 ******** 2026-04-16 08:50:04.369512 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:50:04.369530 | orchestrator | 2026-04-16 08:50:04.369541 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-04-16 08:50:04.369551 | orchestrator | Thursday 16 April 2026 08:49:50 +0000 (0:00:02.553) 1:03:57.611 ******** 2026-04-16 08:50:04.369563 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:50:04.369573 | orchestrator | 2026-04-16 08:50:04.369585 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-04-16 08:50:04.369595 | orchestrator | Thursday 16 April 2026 08:49:52 +0000 (0:00:02.011) 1:03:59.622 ******** 2026-04-16 08:50:04.369606 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-04-16 08:50:04.369618 | orchestrator | 2026-04-16 08:50:04.369628 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-04-16 08:50:04.369639 | orchestrator | Thursday 16 April 2026 08:49:54 +0000 (0:00:01.195) 1:04:00.818 ******** 2026-04-16 08:50:04.369650 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-04-16 08:50:04.369661 | orchestrator | 2026-04-16 08:50:04.369672 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-04-16 08:50:04.369683 | orchestrator | Thursday 16 April 2026 08:49:55 +0000 (0:00:01.118) 1:04:01.937 ******** 2026-04-16 08:50:04.369694 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.369705 | orchestrator | 2026-04-16 08:50:04.369716 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-04-16 08:50:04.369726 | orchestrator | Thursday 16 April 2026 08:49:56 +0000 (0:00:01.115) 1:04:03.052 ******** 2026-04-16 08:50:04.369737 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:04.369748 | orchestrator | 2026-04-16 08:50:04.369759 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-04-16 08:50:04.369770 | orchestrator | Thursday 16 April 2026 08:49:57 +0000 (0:00:01.566) 1:04:04.619 ******** 2026-04-16 08:50:04.369781 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:04.369791 | orchestrator | 2026-04-16 08:50:04.369808 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-04-16 08:50:04.369826 | orchestrator | Thursday 16 April 2026 08:49:59 +0000 (0:00:01.542) 1:04:06.161 ******** 2026-04-16 08:50:04.369843 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:04.369861 | orchestrator | 2026-04-16 08:50:04.369880 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-04-16 08:50:04.369898 | orchestrator | Thursday 16 April 2026 08:50:00 +0000 (0:00:01.549) 1:04:07.711 ******** 2026-04-16 08:50:04.369918 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.369938 | orchestrator | 2026-04-16 08:50:04.369957 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-04-16 08:50:04.369975 | orchestrator | Thursday 16 April 2026 08:50:02 +0000 (0:00:01.104) 1:04:08.815 ******** 2026-04-16 08:50:04.369994 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.370013 | orchestrator | 2026-04-16 08:50:04.370165 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-04-16 08:50:04.370177 | orchestrator | Thursday 16 April 2026 08:50:03 +0000 (0:00:01.184) 1:04:10.000 ******** 2026-04-16 08:50:04.370188 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:04.370199 | orchestrator | 2026-04-16 08:50:04.370218 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-04-16 08:50:04.370242 | orchestrator | Thursday 16 April 2026 08:50:04 +0000 (0:00:01.115) 1:04:11.116 ******** 2026-04-16 08:50:42.861522 | 
orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.861644 | orchestrator | 2026-04-16 08:50:42.861661 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:50:42.861674 | orchestrator | Thursday 16 April 2026 08:50:05 +0000 (0:00:01.576) 1:04:12.692 ******** 2026-04-16 08:50:42.861685 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.861696 | orchestrator | 2026-04-16 08:50:42.861733 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:50:42.861746 | orchestrator | Thursday 16 April 2026 08:50:07 +0000 (0:00:01.520) 1:04:14.212 ******** 2026-04-16 08:50:42.861765 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.861786 | orchestrator | 2026-04-16 08:50:42.861805 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:50:42.861825 | orchestrator | Thursday 16 April 2026 08:50:08 +0000 (0:00:00.762) 1:04:14.975 ******** 2026-04-16 08:50:42.861845 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.861864 | orchestrator | 2026-04-16 08:50:42.861882 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:50:42.861901 | orchestrator | Thursday 16 April 2026 08:50:08 +0000 (0:00:00.746) 1:04:15.722 ******** 2026-04-16 08:50:42.861920 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.861939 | orchestrator | 2026-04-16 08:50:42.861957 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:50:42.861974 | orchestrator | Thursday 16 April 2026 08:50:09 +0000 (0:00:00.765) 1:04:16.488 ******** 2026-04-16 08:50:42.861993 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.862094 | orchestrator | 2026-04-16 08:50:42.862155 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:50:42.862176 
| orchestrator | Thursday 16 April 2026 08:50:10 +0000 (0:00:00.779) 1:04:17.268 ******** 2026-04-16 08:50:42.862195 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.862217 | orchestrator | 2026-04-16 08:50:42.862242 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:50:42.862260 | orchestrator | Thursday 16 April 2026 08:50:11 +0000 (0:00:00.790) 1:04:18.058 ******** 2026-04-16 08:50:42.862278 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862296 | orchestrator | 2026-04-16 08:50:42.862312 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:50:42.862328 | orchestrator | Thursday 16 April 2026 08:50:12 +0000 (0:00:00.748) 1:04:18.807 ******** 2026-04-16 08:50:42.862346 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862364 | orchestrator | 2026-04-16 08:50:42.862382 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:50:42.862401 | orchestrator | Thursday 16 April 2026 08:50:12 +0000 (0:00:00.752) 1:04:19.559 ******** 2026-04-16 08:50:42.862420 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862438 | orchestrator | 2026-04-16 08:50:42.862457 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:50:42.862476 | orchestrator | Thursday 16 April 2026 08:50:13 +0000 (0:00:00.780) 1:04:20.340 ******** 2026-04-16 08:50:42.862495 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.862514 | orchestrator | 2026-04-16 08:50:42.862532 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 08:50:42.862543 | orchestrator | Thursday 16 April 2026 08:50:14 +0000 (0:00:00.793) 1:04:21.133 ******** 2026-04-16 08:50:42.862554 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.862565 | orchestrator | 2026-04-16 08:50:42.862576 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:50:42.862588 | orchestrator | Thursday 16 April 2026 08:50:15 +0000 (0:00:00.811) 1:04:21.945 ******** 2026-04-16 08:50:42.862598 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862609 | orchestrator | 2026-04-16 08:50:42.862619 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 08:50:42.862630 | orchestrator | Thursday 16 April 2026 08:50:15 +0000 (0:00:00.778) 1:04:22.723 ******** 2026-04-16 08:50:42.862641 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862651 | orchestrator | 2026-04-16 08:50:42.862662 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:50:42.862672 | orchestrator | Thursday 16 April 2026 08:50:16 +0000 (0:00:00.761) 1:04:23.485 ******** 2026-04-16 08:50:42.862683 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862693 | orchestrator | 2026-04-16 08:50:42.862720 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:50:42.862731 | orchestrator | Thursday 16 April 2026 08:50:17 +0000 (0:00:00.790) 1:04:24.276 ******** 2026-04-16 08:50:42.862746 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862764 | orchestrator | 2026-04-16 08:50:42.862785 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:50:42.862812 | orchestrator | Thursday 16 April 2026 08:50:18 +0000 (0:00:00.752) 1:04:25.029 ******** 2026-04-16 08:50:42.862830 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862847 | orchestrator | 2026-04-16 08:50:42.862864 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:50:42.862882 | orchestrator | Thursday 16 April 2026 08:50:19 +0000 (0:00:00.745) 1:04:25.774 ******** 
2026-04-16 08:50:42.862900 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862918 | orchestrator | 2026-04-16 08:50:42.862936 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:50:42.862954 | orchestrator | Thursday 16 April 2026 08:50:19 +0000 (0:00:00.741) 1:04:26.515 ******** 2026-04-16 08:50:42.862971 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.862990 | orchestrator | 2026-04-16 08:50:42.863008 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:50:42.863028 | orchestrator | Thursday 16 April 2026 08:50:20 +0000 (0:00:00.790) 1:04:27.306 ******** 2026-04-16 08:50:42.863045 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863062 | orchestrator | 2026-04-16 08:50:42.863073 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:50:42.863128 | orchestrator | Thursday 16 April 2026 08:50:21 +0000 (0:00:00.780) 1:04:28.086 ******** 2026-04-16 08:50:42.863147 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863158 | orchestrator | 2026-04-16 08:50:42.863193 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:50:42.863205 | orchestrator | Thursday 16 April 2026 08:50:22 +0000 (0:00:00.752) 1:04:28.838 ******** 2026-04-16 08:50:42.863216 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863227 | orchestrator | 2026-04-16 08:50:42.863238 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:50:42.863248 | orchestrator | Thursday 16 April 2026 08:50:22 +0000 (0:00:00.756) 1:04:29.595 ******** 2026-04-16 08:50:42.863259 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863270 | orchestrator | 2026-04-16 08:50:42.863281 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-16 08:50:42.863291 | orchestrator | Thursday 16 April 2026 08:50:23 +0000 (0:00:00.801) 1:04:30.397 ******** 2026-04-16 08:50:42.863302 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863313 | orchestrator | 2026-04-16 08:50:42.863324 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:50:42.863334 | orchestrator | Thursday 16 April 2026 08:50:24 +0000 (0:00:00.767) 1:04:31.164 ******** 2026-04-16 08:50:42.863406 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.863418 | orchestrator | 2026-04-16 08:50:42.863429 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:50:42.863440 | orchestrator | Thursday 16 April 2026 08:50:26 +0000 (0:00:01.615) 1:04:32.780 ******** 2026-04-16 08:50:42.863450 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.863461 | orchestrator | 2026-04-16 08:50:42.863472 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:50:42.863483 | orchestrator | Thursday 16 April 2026 08:50:27 +0000 (0:00:01.962) 1:04:34.743 ******** 2026-04-16 08:50:42.863494 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-04-16 08:50:42.863507 | orchestrator | 2026-04-16 08:50:42.863518 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:50:42.863528 | orchestrator | Thursday 16 April 2026 08:50:29 +0000 (0:00:01.198) 1:04:35.942 ******** 2026-04-16 08:50:42.863551 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863562 | orchestrator | 2026-04-16 08:50:42.863572 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:50:42.863583 | orchestrator | Thursday 16 April 2026 08:50:30 +0000 (0:00:01.123) 1:04:37.065 ******** 
2026-04-16 08:50:42.863598 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863617 | orchestrator | 2026-04-16 08:50:42.863647 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:50:42.863664 | orchestrator | Thursday 16 April 2026 08:50:31 +0000 (0:00:01.099) 1:04:38.164 ******** 2026-04-16 08:50:42.863682 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:50:42.863700 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:50:42.863716 | orchestrator | 2026-04-16 08:50:42.863734 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:50:42.863752 | orchestrator | Thursday 16 April 2026 08:50:33 +0000 (0:00:01.812) 1:04:39.977 ******** 2026-04-16 08:50:42.863769 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.863786 | orchestrator | 2026-04-16 08:50:42.863804 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:50:42.863868 | orchestrator | Thursday 16 April 2026 08:50:34 +0000 (0:00:01.478) 1:04:41.455 ******** 2026-04-16 08:50:42.863886 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863905 | orchestrator | 2026-04-16 08:50:42.863925 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:50:42.863943 | orchestrator | Thursday 16 April 2026 08:50:35 +0000 (0:00:01.119) 1:04:42.575 ******** 2026-04-16 08:50:42.863960 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.863976 | orchestrator | 2026-04-16 08:50:42.863987 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:50:42.863998 | orchestrator | Thursday 16 April 2026 08:50:36 +0000 (0:00:00.779) 1:04:43.355 ******** 2026-04-16 08:50:42.864009 | orchestrator | 
skipping: [testbed-node-4] 2026-04-16 08:50:42.864019 | orchestrator | 2026-04-16 08:50:42.864030 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:50:42.864040 | orchestrator | Thursday 16 April 2026 08:50:37 +0000 (0:00:00.768) 1:04:44.124 ******** 2026-04-16 08:50:42.864051 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-04-16 08:50:42.864061 | orchestrator | 2026-04-16 08:50:42.864072 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:50:42.864083 | orchestrator | Thursday 16 April 2026 08:50:38 +0000 (0:00:01.090) 1:04:45.214 ******** 2026-04-16 08:50:42.864094 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:50:42.864132 | orchestrator | 2026-04-16 08:50:42.864144 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:50:42.864154 | orchestrator | Thursday 16 April 2026 08:50:40 +0000 (0:00:01.689) 1:04:46.903 ******** 2026-04-16 08:50:42.864165 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:50:42.864176 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:50:42.864186 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:50:42.864197 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.864208 | orchestrator | 2026-04-16 08:50:42.864218 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:50:42.864228 | orchestrator | Thursday 16 April 2026 08:50:41 +0000 (0:00:01.127) 1:04:48.030 ******** 2026-04-16 08:50:42.864239 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.864250 | orchestrator | 2026-04-16 08:50:42.864261 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-16 08:50:42.864281 | orchestrator | Thursday 16 April 2026 08:50:42 +0000 (0:00:01.105) 1:04:49.136 ******** 2026-04-16 08:50:42.864292 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:50:42.864315 | orchestrator | 2026-04-16 08:50:42.864338 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:51:25.380219 | orchestrator | Thursday 16 April 2026 08:50:43 +0000 (0:00:01.175) 1:04:50.312 ******** 2026-04-16 08:51:25.380386 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380420 | orchestrator | 2026-04-16 08:51:25.380443 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:51:25.380462 | orchestrator | Thursday 16 April 2026 08:50:44 +0000 (0:00:01.125) 1:04:51.437 ******** 2026-04-16 08:51:25.380482 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380502 | orchestrator | 2026-04-16 08:51:25.380521 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:51:25.380540 | orchestrator | Thursday 16 April 2026 08:50:45 +0000 (0:00:01.126) 1:04:52.563 ******** 2026-04-16 08:51:25.380560 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380580 | orchestrator | 2026-04-16 08:51:25.380601 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:51:25.380616 | orchestrator | Thursday 16 April 2026 08:50:46 +0000 (0:00:00.781) 1:04:53.345 ******** 2026-04-16 08:51:25.380627 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:51:25.380639 | orchestrator | 2026-04-16 08:51:25.380651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:51:25.380662 | orchestrator | Thursday 16 April 2026 08:50:48 +0000 (0:00:02.208) 1:04:55.554 ******** 2026-04-16 08:51:25.380674 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 08:51:25.380685 | orchestrator | 2026-04-16 08:51:25.380696 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:51:25.380707 | orchestrator | Thursday 16 April 2026 08:50:49 +0000 (0:00:00.749) 1:04:56.303 ******** 2026-04-16 08:51:25.380718 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-04-16 08:51:25.380729 | orchestrator | 2026-04-16 08:51:25.380741 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:51:25.380752 | orchestrator | Thursday 16 April 2026 08:50:50 +0000 (0:00:01.139) 1:04:57.443 ******** 2026-04-16 08:51:25.380763 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380774 | orchestrator | 2026-04-16 08:51:25.380785 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:51:25.380796 | orchestrator | Thursday 16 April 2026 08:50:51 +0000 (0:00:01.133) 1:04:58.577 ******** 2026-04-16 08:51:25.380807 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380818 | orchestrator | 2026-04-16 08:51:25.380829 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:51:25.380840 | orchestrator | Thursday 16 April 2026 08:50:52 +0000 (0:00:01.119) 1:04:59.697 ******** 2026-04-16 08:51:25.380851 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380862 | orchestrator | 2026-04-16 08:51:25.380873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:51:25.380884 | orchestrator | Thursday 16 April 2026 08:50:54 +0000 (0:00:01.119) 1:05:00.816 ******** 2026-04-16 08:51:25.380895 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380906 | orchestrator | 2026-04-16 08:51:25.380917 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-16 08:51:25.380928 | orchestrator | Thursday 16 April 2026 08:50:55 +0000 (0:00:01.170) 1:05:01.987 ******** 2026-04-16 08:51:25.380939 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380949 | orchestrator | 2026-04-16 08:51:25.380960 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:51:25.380971 | orchestrator | Thursday 16 April 2026 08:50:56 +0000 (0:00:01.114) 1:05:03.102 ******** 2026-04-16 08:51:25.380982 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.380993 | orchestrator | 2026-04-16 08:51:25.381004 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:51:25.381015 | orchestrator | Thursday 16 April 2026 08:50:57 +0000 (0:00:01.113) 1:05:04.216 ******** 2026-04-16 08:51:25.381054 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381065 | orchestrator | 2026-04-16 08:51:25.381076 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:51:25.381087 | orchestrator | Thursday 16 April 2026 08:50:58 +0000 (0:00:01.136) 1:05:05.352 ******** 2026-04-16 08:51:25.381098 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381108 | orchestrator | 2026-04-16 08:51:25.381167 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:51:25.381179 | orchestrator | Thursday 16 April 2026 08:50:59 +0000 (0:00:01.121) 1:05:06.473 ******** 2026-04-16 08:51:25.381190 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:51:25.381200 | orchestrator | 2026-04-16 08:51:25.381211 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:51:25.381222 | orchestrator | Thursday 16 April 2026 08:51:00 +0000 (0:00:00.781) 1:05:07.255 ******** 2026-04-16 08:51:25.381233 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-04-16 08:51:25.381245 | orchestrator | 2026-04-16 08:51:25.381256 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:51:25.381267 | orchestrator | Thursday 16 April 2026 08:51:01 +0000 (0:00:01.104) 1:05:08.360 ******** 2026-04-16 08:51:25.381278 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-04-16 08:51:25.381289 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-04-16 08:51:25.381300 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-04-16 08:51:25.381311 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-04-16 08:51:25.381322 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-04-16 08:51:25.381332 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-04-16 08:51:25.381343 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-04-16 08:51:25.381354 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:51:25.381379 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:51:25.381390 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:51:25.381402 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:51:25.381433 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:51:25.381445 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:51:25.381456 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:51:25.381467 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-04-16 08:51:25.381478 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-04-16 08:51:25.381489 | orchestrator | 2026-04-16 08:51:25.381500 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:51:25.381511 | orchestrator | Thursday 16 April 2026 08:51:08 +0000 (0:00:06.698) 1:05:15.059 ******** 2026-04-16 08:51:25.381521 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-04-16 08:51:25.381532 | orchestrator | 2026-04-16 08:51:25.381543 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:51:25.381554 | orchestrator | Thursday 16 April 2026 08:51:09 +0000 (0:00:01.153) 1:05:16.212 ******** 2026-04-16 08:51:25.381565 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:51:25.381577 | orchestrator | 2026-04-16 08:51:25.381588 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:51:25.381599 | orchestrator | Thursday 16 April 2026 08:51:10 +0000 (0:00:01.495) 1:05:17.708 ******** 2026-04-16 08:51:25.381610 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:51:25.381621 | orchestrator | 2026-04-16 08:51:25.381632 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:51:25.381652 | orchestrator | Thursday 16 April 2026 08:51:12 +0000 (0:00:01.583) 1:05:19.292 ******** 2026-04-16 08:51:25.381663 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381674 | orchestrator | 2026-04-16 08:51:25.381684 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:51:25.381695 | orchestrator | Thursday 16 April 2026 08:51:13 +0000 (0:00:00.776) 1:05:20.068 ******** 2026-04-16 08:51:25.381706 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381717 | 
orchestrator | 2026-04-16 08:51:25.381728 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:51:25.381738 | orchestrator | Thursday 16 April 2026 08:51:14 +0000 (0:00:00.827) 1:05:20.895 ******** 2026-04-16 08:51:25.381749 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381760 | orchestrator | 2026-04-16 08:51:25.381771 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:51:25.381782 | orchestrator | Thursday 16 April 2026 08:51:14 +0000 (0:00:00.810) 1:05:21.706 ******** 2026-04-16 08:51:25.381793 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381803 | orchestrator | 2026-04-16 08:51:25.381814 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:51:25.381825 | orchestrator | Thursday 16 April 2026 08:51:15 +0000 (0:00:00.752) 1:05:22.459 ******** 2026-04-16 08:51:25.381836 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381847 | orchestrator | 2026-04-16 08:51:25.381857 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:51:25.381868 | orchestrator | Thursday 16 April 2026 08:51:16 +0000 (0:00:00.774) 1:05:23.233 ******** 2026-04-16 08:51:25.381879 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381890 | orchestrator | 2026-04-16 08:51:25.381901 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:51:25.381912 | orchestrator | Thursday 16 April 2026 08:51:17 +0000 (0:00:00.787) 1:05:24.021 ******** 2026-04-16 08:51:25.381922 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381933 | orchestrator | 2026-04-16 08:51:25.381944 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-04-16 08:51:25.381955 | orchestrator | Thursday 16 April 2026 08:51:18 +0000 (0:00:00.763) 1:05:24.784 ******** 2026-04-16 08:51:25.381966 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.381976 | orchestrator | 2026-04-16 08:51:25.381987 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-04-16 08:51:25.381998 | orchestrator | Thursday 16 April 2026 08:51:18 +0000 (0:00:00.759) 1:05:25.544 ******** 2026-04-16 08:51:25.382009 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.382085 | orchestrator | 2026-04-16 08:51:25.382097 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-04-16 08:51:25.382108 | orchestrator | Thursday 16 April 2026 08:51:19 +0000 (0:00:00.781) 1:05:26.326 ******** 2026-04-16 08:51:25.382136 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.382147 | orchestrator | 2026-04-16 08:51:25.382158 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-04-16 08:51:25.382169 | orchestrator | Thursday 16 April 2026 08:51:20 +0000 (0:00:00.753) 1:05:27.079 ******** 2026-04-16 08:51:25.382180 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:51:25.382190 | orchestrator | 2026-04-16 08:51:25.382201 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-04-16 08:51:25.382212 | orchestrator | Thursday 16 April 2026 08:51:21 +0000 (0:00:00.754) 1:05:27.834 ******** 2026-04-16 08:51:25.382223 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-04-16 08:51:25.382234 | orchestrator | 2026-04-16 08:51:25.382245 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-04-16 08:51:25.382256 | orchestrator | Thursday 16 April 2026 08:51:25 +0000 (0:00:04.082) 1:05:31.917 ******** 2026-04-16 08:51:25.382282 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:51:25.382294 | orchestrator | 2026-04-16 08:51:25.382313 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-04-16 08:52:05.175576 | orchestrator | Thursday 16 April 2026 08:51:25 +0000 (0:00:00.827) 1:05:32.744 ******** 2026-04-16 08:52:05.175726 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-04-16 08:52:05.175747 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-04-16 08:52:05.175761 | orchestrator | 2026-04-16 08:52:05.175774 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-04-16 08:52:05.175786 | orchestrator | Thursday 16 April 2026 08:51:30 +0000 (0:00:04.791) 1:05:37.535 ******** 2026-04-16 08:52:05.175797 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.175810 | orchestrator | 2026-04-16 08:52:05.175822 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-04-16 08:52:05.175833 | orchestrator | Thursday 16 April 2026 08:51:31 +0000 (0:00:00.819) 1:05:38.355 ******** 2026-04-16 08:52:05.175844 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.175855 | orchestrator | 2026-04-16 08:52:05.175867 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-04-16 08:52:05.175880 | orchestrator | Thursday 16 April 2026 08:51:32 +0000 (0:00:00.759) 1:05:39.115 ******** 2026-04-16 08:52:05.175891 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.175902 | orchestrator | 2026-04-16 08:52:05.175913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-04-16 08:52:05.175924 | orchestrator | Thursday 16 April 2026 08:51:33 +0000 (0:00:00.795) 1:05:39.910 ******** 2026-04-16 08:52:05.175935 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.175946 | orchestrator | 2026-04-16 08:52:05.175957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-04-16 08:52:05.175968 | orchestrator | Thursday 16 April 2026 08:51:33 +0000 (0:00:00.796) 1:05:40.706 ******** 2026-04-16 08:52:05.175978 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.175989 | orchestrator | 2026-04-16 08:52:05.176000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-04-16 08:52:05.176011 | orchestrator | Thursday 16 April 2026 08:51:34 +0000 (0:00:00.783) 1:05:41.490 ******** 2026-04-16 08:52:05.176022 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:52:05.176035 | orchestrator | 2026-04-16 08:52:05.176046 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-04-16 08:52:05.176057 | orchestrator | Thursday 16 April 2026 08:51:35 +0000 (0:00:00.888) 1:05:42.378 ******** 2026-04-16 08:52:05.176068 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:52:05.176082 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:52:05.176096 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:52:05.176109 | orchestrator | skipping: 
[testbed-node-4] 2026-04-16 08:52:05.176121 | orchestrator | 2026-04-16 08:52:05.176164 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-04-16 08:52:05.176178 | orchestrator | Thursday 16 April 2026 08:51:36 +0000 (0:00:01.042) 1:05:43.420 ******** 2026-04-16 08:52:05.176191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:52:05.176232 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:52:05.176245 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:52:05.176258 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.176271 | orchestrator | 2026-04-16 08:52:05.176284 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-04-16 08:52:05.176297 | orchestrator | Thursday 16 April 2026 08:51:37 +0000 (0:00:01.030) 1:05:44.451 ******** 2026-04-16 08:52:05.176311 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-04-16 08:52:05.176324 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-04-16 08:52:05.176336 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-04-16 08:52:05.176349 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.176362 | orchestrator | 2026-04-16 08:52:05.176375 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-04-16 08:52:05.176388 | orchestrator | Thursday 16 April 2026 08:51:38 +0000 (0:00:01.032) 1:05:45.483 ******** 2026-04-16 08:52:05.176400 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:52:05.176414 | orchestrator | 2026-04-16 08:52:05.176426 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-04-16 08:52:05.176439 | orchestrator | Thursday 16 April 2026 08:51:39 +0000 (0:00:00.778) 1:05:46.262 ******** 2026-04-16 08:52:05.176450 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-04-16 08:52:05.176461 | orchestrator | 2026-04-16 08:52:05.176472 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-04-16 08:52:05.176483 | orchestrator | Thursday 16 April 2026 08:51:40 +0000 (0:00:00.984) 1:05:47.246 ******** 2026-04-16 08:52:05.176494 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:52:05.176505 | orchestrator | 2026-04-16 08:52:05.176516 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-04-16 08:52:05.176544 | orchestrator | Thursday 16 April 2026 08:51:41 +0000 (0:00:01.421) 1:05:48.668 ******** 2026-04-16 08:52:05.176556 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-04-16 08:52:05.176567 | orchestrator | 2026-04-16 08:52:05.176599 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 08:52:05.176611 | orchestrator | Thursday 16 April 2026 08:51:43 +0000 (0:00:01.209) 1:05:49.877 ******** 2026-04-16 08:52:05.176622 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:52:05.176633 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 08:52:05.176645 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:52:05.176656 | orchestrator | 2026-04-16 08:52:05.176667 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:52:05.176678 | orchestrator | Thursday 16 April 2026 08:51:46 +0000 (0:00:03.281) 1:05:53.159 ******** 2026-04-16 08:52:05.176689 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 08:52:05.176700 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-04-16 08:52:05.176711 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:52:05.176723 | orchestrator | 2026-04-16 08:52:05.176752 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-04-16 08:52:05.176774 | orchestrator | Thursday 16 April 2026 08:51:48 +0000 (0:00:02.033) 1:05:55.192 ******** 2026-04-16 08:52:05.176786 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.176797 | orchestrator | 2026-04-16 08:52:05.176808 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-04-16 08:52:05.176819 | orchestrator | Thursday 16 April 2026 08:51:49 +0000 (0:00:00.753) 1:05:55.946 ******** 2026-04-16 08:52:05.176830 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-04-16 08:52:05.176842 | orchestrator | 2026-04-16 08:52:05.176852 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-04-16 08:52:05.176863 | orchestrator | Thursday 16 April 2026 08:51:50 +0000 (0:00:01.095) 1:05:57.042 ******** 2026-04-16 08:52:05.176883 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:52:05.176896 | orchestrator | 2026-04-16 08:52:05.176907 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-04-16 08:52:05.176918 | orchestrator | Thursday 16 April 2026 08:51:51 +0000 (0:00:01.625) 1:05:58.667 ******** 2026-04-16 08:52:05.176929 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:52:05.176940 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-04-16 08:52:05.176951 | orchestrator | 2026-04-16 08:52:05.176962 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-04-16 08:52:05.176972 | orchestrator | Thursday 16 April 2026 08:51:57 +0000 (0:00:05.137) 1:06:03.805 ******** 
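The "Create rados gateway directories" and "Create rgw keyrings" tasks above loop over a per-host `rgw_instances` fact whose items look like `{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}`. As a rough sketch of how such items are composed (the function name and the counter-plus-base-port scheme are assumptions for illustration, not ceph-ansible's actual implementation):

```python
# Hypothetical helper mirroring the shape of ceph-ansible's rgw_instances
# fact as it appears in the log above: one dict per instance, with a
# counter-based instance name and a port offset from a base frontend port.

def build_rgw_instances(radosgw_address, base_port=8081, num_instances=1):
    """Return a list of rgw instance dicts like those logged by ceph-ansible."""
    return [
        {
            "instance_name": f"rgw{i}",
            "radosgw_address": radosgw_address,
            "radosgw_frontend_port": base_port + i,
        }
        for i in range(num_instances)
    ]

print(build_rgw_instances("192.168.16.14"))
# → [{'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}]
```

With `num_instances=1` this reproduces exactly the single `rgw0` item that the tasks above iterate over for testbed-node-4.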
2026-04-16 08:52:05.176983 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-04-16 08:52:05.176994 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-04-16 08:52:05.177005 | orchestrator | 2026-04-16 08:52:05.177016 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-04-16 08:52:05.177027 | orchestrator | Thursday 16 April 2026 08:52:00 +0000 (0:00:03.137) 1:06:06.942 ******** 2026-04-16 08:52:05.177038 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 08:52:05.177049 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:52:05.177060 | orchestrator | 2026-04-16 08:52:05.177071 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-04-16 08:52:05.177081 | orchestrator | Thursday 16 April 2026 08:52:01 +0000 (0:00:01.622) 1:06:08.564 ******** 2026-04-16 08:52:05.177092 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-04-16 08:52:05.177103 | orchestrator | 2026-04-16 08:52:05.177114 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-04-16 08:52:05.177139 | orchestrator | Thursday 16 April 2026 08:52:02 +0000 (0:00:01.138) 1:06:09.702 ******** 2026-04-16 08:52:05.177151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177207 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:52:05.177218 | orchestrator | 2026-04-16 08:52:05.177229 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-04-16 08:52:05.177240 | orchestrator | Thursday 16 April 2026 08:52:04 +0000 (0:00:01.812) 1:06:11.515 ******** 2026-04-16 08:52:05.177251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:52:05.177297 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:53:11.487884 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-04-16 08:53:11.488034 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:53:11.488056 | orchestrator | 2026-04-16 08:53:11.488071 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-04-16 08:53:11.488086 | orchestrator | Thursday 16 April 2026 08:52:06 +0000 (0:00:01.543) 1:06:13.059 ******** 2026-04-16 08:53:11.488103 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:53:11.488118 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:53:11.488132 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:53:11.488198 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:53:11.488214 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-04-16 08:53:11.488228 | orchestrator | 2026-04-16 08:53:11.488241 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-04-16 08:53:11.488256 | orchestrator | Thursday 16 April 2026 08:52:38 +0000 (0:00:31.763) 1:06:44.822 ******** 2026-04-16 08:53:11.488269 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:53:11.488282 | orchestrator | 2026-04-16 08:53:11.488295 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-04-16 08:53:11.488308 | orchestrator | Thursday 16 April 2026 08:52:38 +0000 (0:00:00.745) 1:06:45.567 ******** 2026-04-16 08:53:11.488324 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:53:11.488338 | orchestrator | 2026-04-16 08:53:11.488352 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-04-16 08:53:11.488366 | orchestrator | Thursday 16 April 2026 08:52:39 +0000 (0:00:00.759) 1:06:46.326 ******** 2026-04-16 08:53:11.488376 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-04-16 08:53:11.488385 | orchestrator | 2026-04-16 08:53:11.488394 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-04-16 08:53:11.488402 | orchestrator | Thursday 16 April 2026 08:52:40 +0000 (0:00:01.114) 1:06:47.441 ******** 2026-04-16 08:53:11.488411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-04-16 08:53:11.488420 | orchestrator | 2026-04-16 08:53:11.488428 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-04-16 08:53:11.488437 | orchestrator | Thursday 16 April 2026 08:52:41 +0000 (0:00:01.097) 1:06:48.538 ******** 2026-04-16 08:53:11.488445 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:53:11.488455 | orchestrator | 2026-04-16 08:53:11.488464 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-04-16 08:53:11.488472 | orchestrator | Thursday 16 April 2026 08:52:43 +0000 (0:00:02.033) 1:06:50.572 ******** 2026-04-16 08:53:11.488481 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:53:11.488490 | orchestrator | 2026-04-16 08:53:11.488498 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-04-16 08:53:11.488507 | orchestrator | Thursday 16 April 2026 08:52:45 +0000 (0:00:02.066) 1:06:52.638 ******** 2026-04-16 08:53:11.488516 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:53:11.488524 | orchestrator | 2026-04-16 08:53:11.488533 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-04-16 08:53:11.488542 | orchestrator | Thursday 16 April 2026 08:52:48 +0000 (0:00:02.277) 1:06:54.915 ******** 2026-04-16 08:53:11.488551 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-04-16 08:53:11.488559 | orchestrator | 2026-04-16 08:53:11.488580 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-04-16 08:53:11.488588 | 
orchestrator | 2026-04-16 08:53:11.488597 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:53:11.488609 | orchestrator | Thursday 16 April 2026 08:52:51 +0000 (0:00:03.198) 1:06:58.114 ******** 2026-04-16 08:53:11.488624 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-04-16 08:53:11.488638 | orchestrator | 2026-04-16 08:53:11.488651 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-04-16 08:53:11.488666 | orchestrator | Thursday 16 April 2026 08:52:52 +0000 (0:00:01.107) 1:06:59.221 ******** 2026-04-16 08:53:11.488681 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488695 | orchestrator | 2026-04-16 08:53:11.488708 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-04-16 08:53:11.488724 | orchestrator | Thursday 16 April 2026 08:52:53 +0000 (0:00:01.454) 1:07:00.675 ******** 2026-04-16 08:53:11.488738 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488754 | orchestrator | 2026-04-16 08:53:11.488764 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:53:11.488772 | orchestrator | Thursday 16 April 2026 08:52:55 +0000 (0:00:01.116) 1:07:01.792 ******** 2026-04-16 08:53:11.488793 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488802 | orchestrator | 2026-04-16 08:53:11.488811 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:53:11.488820 | orchestrator | Thursday 16 April 2026 08:52:56 +0000 (0:00:01.441) 1:07:03.234 ******** 2026-04-16 08:53:11.488829 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488837 | orchestrator | 2026-04-16 08:53:11.488865 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-04-16 08:53:11.488875 | orchestrator | Thursday 
16 April 2026 08:52:57 +0000 (0:00:01.161) 1:07:04.395 ******** 2026-04-16 08:53:11.488884 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488892 | orchestrator | 2026-04-16 08:53:11.488901 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-04-16 08:53:11.488909 | orchestrator | Thursday 16 April 2026 08:52:58 +0000 (0:00:01.135) 1:07:05.531 ******** 2026-04-16 08:53:11.488918 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488927 | orchestrator | 2026-04-16 08:53:11.488936 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-04-16 08:53:11.488944 | orchestrator | Thursday 16 April 2026 08:52:59 +0000 (0:00:01.125) 1:07:06.657 ******** 2026-04-16 08:53:11.488953 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:11.488962 | orchestrator | 2026-04-16 08:53:11.488970 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-04-16 08:53:11.488979 | orchestrator | Thursday 16 April 2026 08:53:01 +0000 (0:00:01.157) 1:07:07.814 ******** 2026-04-16 08:53:11.488988 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.488996 | orchestrator | 2026-04-16 08:53:11.489005 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-04-16 08:53:11.489013 | orchestrator | Thursday 16 April 2026 08:53:02 +0000 (0:00:01.101) 1:07:08.916 ******** 2026-04-16 08:53:11.489028 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:53:11.489042 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:53:11.489061 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:53:11.489083 | orchestrator | 2026-04-16 08:53:11.489097 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-04-16 08:53:11.489111 | orchestrator | Thursday 16 April 2026 08:53:04 +0000 (0:00:01.882) 1:07:10.798 ******** 2026-04-16 08:53:11.489125 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:11.489137 | orchestrator | 2026-04-16 08:53:11.489188 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-04-16 08:53:11.489201 | orchestrator | Thursday 16 April 2026 08:53:05 +0000 (0:00:01.571) 1:07:12.370 ******** 2026-04-16 08:53:11.489229 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-04-16 08:53:11.489242 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-04-16 08:53:11.489254 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-04-16 08:53:11.489267 | orchestrator | 2026-04-16 08:53:11.489280 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-04-16 08:53:11.489292 | orchestrator | Thursday 16 April 2026 08:53:08 +0000 (0:00:02.798) 1:07:15.168 ******** 2026-04-16 08:53:11.489305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-04-16 08:53:11.489318 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-04-16 08:53:11.489331 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-04-16 08:53:11.489343 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:11.489357 | orchestrator | 2026-04-16 08:53:11.489370 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-04-16 08:53:11.489383 | orchestrator | Thursday 16 April 2026 08:53:09 +0000 (0:00:01.375) 1:07:16.544 ******** 2026-04-16 08:53:11.489398 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-04-16 08:53:11.489414 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-04-16 08:53:11.489428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-04-16 08:53:11.489441 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:11.489455 | orchestrator | 2026-04-16 08:53:11.489468 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-04-16 08:53:11.489481 | orchestrator | Thursday 16 April 2026 08:53:11 +0000 (0:00:01.605) 1:07:18.149 ******** 2026-04-16 08:53:11.489498 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:11.489535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:31.032931 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:31.033059 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033081 | orchestrator | 2026-04-16 08:53:31.033095 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-04-16 08:53:31.033105 | orchestrator | Thursday 16 April 2026 08:53:12 +0000 (0:00:01.149) 1:07:19.299 ******** 2026-04-16 08:53:31.033115 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '73554beccbed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-04-16 08:53:06.140813', 'end': '2026-04-16 08:53:06.178111', 'delta': '0:00:00.037298', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['73554beccbed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-04-16 08:53:31.033174 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '2ad110912802', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-04-16 08:53:06.661235', 'end': '2026-04-16 08:53:06.706108', 'delta': '0:00:00.044873', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['2ad110912802'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-04-16 08:53:31.033187 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6b24f5cd3734', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-04-16 08:53:07.195380', 'end': '2026-04-16 08:53:07.244373', 'delta': '0:00:00.048993', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6b24f5cd3734'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-04-16 08:53:31.033195 | orchestrator | 2026-04-16 08:53:31.033203 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-04-16 08:53:31.033211 | orchestrator | Thursday 16 April 2026 08:53:13 +0000 (0:00:01.169) 1:07:20.469 ******** 2026-04-16 08:53:31.033219 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033228 | orchestrator | 2026-04-16 08:53:31.033236 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-04-16 08:53:31.033244 | orchestrator | Thursday 16 April 2026 08:53:14 +0000 (0:00:01.284) 1:07:21.753 ******** 2026-04-16 08:53:31.033251 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033260 | orchestrator | 2026-04-16 08:53:31.033267 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
********************************* 2026-04-16 08:53:31.033275 | orchestrator | Thursday 16 April 2026 08:53:16 +0000 (0:00:01.266) 1:07:23.020 ******** 2026-04-16 08:53:31.033283 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033291 | orchestrator | 2026-04-16 08:53:31.033298 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-04-16 08:53:31.033306 | orchestrator | Thursday 16 April 2026 08:53:17 +0000 (0:00:01.136) 1:07:24.157 ******** 2026-04-16 08:53:31.033314 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-04-16 08:53:31.033322 | orchestrator | 2026-04-16 08:53:31.033343 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:53:31.033351 | orchestrator | Thursday 16 April 2026 08:53:19 +0000 (0:00:01.903) 1:07:26.060 ******** 2026-04-16 08:53:31.033359 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033366 | orchestrator | 2026-04-16 08:53:31.033374 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-04-16 08:53:31.033382 | orchestrator | Thursday 16 April 2026 08:53:20 +0000 (0:00:01.153) 1:07:27.214 ******** 2026-04-16 08:53:31.033412 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033421 | orchestrator | 2026-04-16 08:53:31.033429 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-04-16 08:53:31.033437 | orchestrator | Thursday 16 April 2026 08:53:21 +0000 (0:00:01.105) 1:07:28.319 ******** 2026-04-16 08:53:31.033445 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033452 | orchestrator | 2026-04-16 08:53:31.033460 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-04-16 08:53:31.033470 | orchestrator | Thursday 16 April 2026 08:53:22 +0000 (0:00:01.209) 1:07:29.528 ******** 2026-04-16 08:53:31.033480 | orchestrator | 
skipping: [testbed-node-5] 2026-04-16 08:53:31.033489 | orchestrator | 2026-04-16 08:53:31.033499 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-04-16 08:53:31.033508 | orchestrator | Thursday 16 April 2026 08:53:23 +0000 (0:00:01.159) 1:07:30.688 ******** 2026-04-16 08:53:31.033529 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033547 | orchestrator | 2026-04-16 08:53:31.033556 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-04-16 08:53:31.033565 | orchestrator | Thursday 16 April 2026 08:53:25 +0000 (0:00:01.132) 1:07:31.820 ******** 2026-04-16 08:53:31.033574 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033583 | orchestrator | 2026-04-16 08:53:31.033592 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-04-16 08:53:31.033601 | orchestrator | Thursday 16 April 2026 08:53:26 +0000 (0:00:01.192) 1:07:33.012 ******** 2026-04-16 08:53:31.033610 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033619 | orchestrator | 2026-04-16 08:53:31.033629 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-04-16 08:53:31.033638 | orchestrator | Thursday 16 April 2026 08:53:27 +0000 (0:00:01.136) 1:07:34.149 ******** 2026-04-16 08:53:31.033647 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033656 | orchestrator | 2026-04-16 08:53:31.033665 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-04-16 08:53:31.033674 | orchestrator | Thursday 16 April 2026 08:53:28 +0000 (0:00:01.213) 1:07:35.363 ******** 2026-04-16 08:53:31.033683 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:53:31.033693 | orchestrator | 2026-04-16 08:53:31.033702 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-04-16 08:53:31.033711 
| orchestrator | Thursday 16 April 2026 08:53:29 +0000 (0:00:01.127) 1:07:36.491 ******** 2026-04-16 08:53:31.033720 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:53:31.033729 | orchestrator | 2026-04-16 08:53:31.033738 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-04-16 08:53:31.033747 | orchestrator | Thursday 16 April 2026 08:53:30 +0000 (0:00:01.163) 1:07:37.654 ******** 2026-04-16 08:53:31.033757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.033768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}})  2026-04-16 08:53:31.033784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-04-16 08:53:31.033807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}})  2026-04-16 08:53:31.153780 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.153911 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.153942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-04-16 08:53:31.153966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.153984 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-04-16 08:53:31.154083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.154113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}})  2026-04-16 08:53:31.154171 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}})  2026-04-16 08:53:31.154186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-04-16 08:53:31.154202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-04-16 08:53:31.154234 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:53:31.154263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-04-16 08:53:31.154295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-04-16 08:53:32.461590 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:32.461696 | orchestrator |
2026-04-16 08:53:32.461713 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-04-16 08:53:32.461726 | orchestrator | Thursday 16 April 2026 08:53:32 +0000 (0:00:01.319) 1:07:38.974 ********
2026-04-16 08:53:32.461741 | orchestrator | skipping: [testbed-node-5] =>
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9', 'dm-uuid-LVM-fFRobsWJJSi2qmm1ob47uuqyznr6XsUbB5l2KW2RGUsyuyPrknrU7KICySLP2Mxh'], 'uuids': ['25948af6-ea3d-47bf-b6b8-1562c64b2d0c'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3', 'scsi-SQEMU_QEMU_HARDDISK_246d5233-913f-43b5-865e-f11d086eabe3'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '246d5233', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-canhtz-WDIM-cSNQ-aj6L-ekuG-TUHQ-N8JXmh', 'scsi-0QEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e', 'scsi-SQEMU_QEMU_HARDDISK_e9d72273-cf2e-45b4-9a8d-8e467f71ab1e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461859 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461884 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-04-16-04-32-41-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461896 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt', 'dm-uuid-CRYPT-LUKS2-af4fa9b9a26b435bb78d02f01d5b278d-uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:32.461952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--4d9f1eac--7172--5024--9561--d385c629a6f5-osd--block--4d9f1eac--7172--5024--9561--d385c629a6f5', 'dm-uuid-LVM-C6wBGBA9hodO8Bb29Gw5u71m1RFwLD6RuBEKXkUhRCEc81DfSMk8arMo7bVDUQjt'], 'uuids': ['af4fa9b9-a26b-435b-b78d-02f01d5b278d'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e9d72273', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['uBEKXk-UhRC-Ec81-DfSM-k8ar-Mo7b-VDUQjt']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679104 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5D6ASH-MOWj-A0uh-g8XL-uNov-bIU1-gX9IX9', 'scsi-0QEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042', 'scsi-SQEMU_QEMU_HARDDISK_e0a81747-53de-4864-82c1-214d11586042'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0a81747', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--44db58af--23ca--547e--81cd--90c78ecf63d9-osd--block--44db58af--23ca--547e--81cd--90c78ecf63d9']}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679307 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679370 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aeef7ba8', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1', 'scsi-SQEMU_QEMU_HARDDISK_aeef7ba8-9496-4124-aafb-d41f3a2fc5cd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679406 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679421 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-04-16 08:53:44.679434 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh', 'dm-uuid-CRYPT-LUKS2-25948af6ea3d47bfb6b81562c64b2d0c-B5l2KW-2RGU-syuy-Prkn-rU7K-ICyS-LP2Mxh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-04-16 08:53:44.679456 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:44.679470 | orchestrator |
2026-04-16 08:53:44.679483 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-04-16 08:53:44.679495 | orchestrator | Thursday 16 April 2026 08:53:33 +0000 (0:00:01.514) 1:07:40.372 ********
2026-04-16 08:53:44.679507 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:53:44.679518 | orchestrator |
2026-04-16 08:53:44.679530 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-04-16 08:53:44.679541 | orchestrator | Thursday 16 April 2026 08:53:35 +0000 (0:00:01.124) 1:07:41.886 ********
2026-04-16 08:53:44.679552 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:53:44.679563 | orchestrator |
2026-04-16 08:53:44.679576 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:53:44.679589 | orchestrator | Thursday 16 April 2026 08:53:36 +0000 (0:00:01.464) 1:07:43.011 ********
2026-04-16 08:53:44.679601 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:53:44.679614 | orchestrator |
2026-04-16 08:53:44.679626 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:53:44.679639 | orchestrator | Thursday 16 April 2026 08:53:37 +0000 (0:00:01.464) 1:07:44.475 ********
2026-04-16 08:53:44.679651 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:44.679663 | orchestrator |
2026-04-16 08:53:44.679676 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-04-16 08:53:44.679689 | orchestrator | Thursday 16 April 2026 08:53:38 +0000 (0:00:01.109) 1:07:45.585 ********
2026-04-16 08:53:44.679701 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:44.679713 | orchestrator |
2026-04-16 08:53:44.679726 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-04-16 08:53:44.679738 | orchestrator | Thursday 16 April 2026 08:53:40 +0000 (0:00:01.574) 1:07:47.160 ********
2026-04-16 08:53:44.679749 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:44.679760 | orchestrator |
2026-04-16 08:53:44.679771 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-04-16 08:53:44.679787 | orchestrator | Thursday 16 April 2026 08:53:41 +0000 (0:00:01.125) 1:07:48.286 ********
2026-04-16 08:53:44.679798 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:53:44.679810 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:53:44.679821 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:53:44.679832 | orchestrator |
2026-04-16 08:53:44.679843 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-04-16 08:53:44.679854 | orchestrator | Thursday 16 April 2026 08:53:43 +0000 (0:00:01.710) 1:07:49.997 ********
2026-04-16 08:53:44.679865 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-04-16 08:53:44.679876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-04-16 08:53:44.679887 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-04-16 08:53:44.679899 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:53:44.679910 | orchestrator |
2026-04-16 08:53:44.679921 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-04-16 08:53:44.679932 | orchestrator | Thursday 16 April 2026 08:53:44 +0000 (0:00:01.206) 1:07:51.204 ********
2026-04-16 08:53:44.679943 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-04-16 08:53:44.679955 | orchestrator |
2026-04-16 08:53:44.679973 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:54:25.961213 | orchestrator | Thursday 16 April 2026 08:53:45 +0000 (0:00:01.126) 1:07:52.330 ********
2026-04-16 08:54:25.961342 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961357 | orchestrator |
2026-04-16 08:54:25.961369 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:54:25.961380 | orchestrator | Thursday 16 April 2026 08:53:46 +0000 (0:00:01.141) 1:07:53.471 ********
2026-04-16 08:54:25.961390 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961400 | orchestrator |
2026-04-16 08:54:25.961410 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:54:25.961420 | orchestrator | Thursday 16 April 2026 08:53:47 +0000 (0:00:01.122) 1:07:54.594 ********
2026-04-16 08:54:25.961430 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961443 | orchestrator |
2026-04-16 08:54:25.961459 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:54:25.961476 | orchestrator | Thursday 16 April 2026 08:53:48 +0000 (0:00:01.159) 1:07:55.754 ********
2026-04-16 08:54:25.961493 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:54:25.961510 | orchestrator |
2026-04-16 08:54:25.961526 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:54:25.961542 | orchestrator | Thursday 16 April 2026 08:53:50 +0000 (0:00:01.250) 1:07:57.004 ********
2026-04-16 08:54:25.961558 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:54:25.961574 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:54:25.961589 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:54:25.961605 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961620 | orchestrator |
2026-04-16 08:54:25.961637 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:54:25.961653 | orchestrator | Thursday 16 April 2026 08:53:51 +0000 (0:00:01.381) 1:07:58.386 ********
2026-04-16 08:54:25.961670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:54:25.961688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:54:25.961705 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:54:25.961721 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961735 | orchestrator |
2026-04-16 08:54:25.961747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:54:25.961758 | orchestrator | Thursday 16 April 2026 08:53:53 +0000 (0:00:01.714) 1:08:00.101 ********
2026-04-16 08:54:25.961769 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:54:25.961780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:54:25.961791 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:54:25.961802 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.961813 | orchestrator |
2026-04-16 08:54:25.961823 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:54:25.961835 | orchestrator | Thursday 16 April 2026 08:53:54 +0000 (0:00:01.541) 1:08:01.642 ********
2026-04-16 08:54:25.961845 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:54:25.961856 | orchestrator |
2026-04-16 08:54:25.961867 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:54:25.961878 | orchestrator | Thursday 16 April 2026 08:53:55 +0000 (0:00:01.098) 1:08:02.741 ********
2026-04-16 08:54:25.961889 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 08:54:25.961900 | orchestrator |
2026-04-16 08:54:25.961911 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-04-16 08:54:25.961922 | orchestrator | Thursday 16 April 2026 08:53:57 +0000 (0:00:01.261) 1:08:04.002 ********
2026-04-16 08:54:25.961933 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:54:25.961944 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:54:25.961978 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:54:25.961990 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:54:25.962001 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:54:25.962013 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:54:25.962082 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:54:25.962093 | orchestrator |
2026-04-16 08:54:25.962114 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-04-16 08:54:25.962124 | orchestrator | Thursday 16 April 2026 08:53:58 +0000 (0:00:01.695) 1:08:05.697 ********
2026-04-16 08:54:25.962134 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-04-16 08:54:25.962144 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-04-16 08:54:25.962153 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-04-16 08:54:25.962199 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-04-16 08:54:25.962211 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-04-16 08:54:25.962220 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:54:25.962230 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-04-16 08:54:25.962240 | orchestrator |
2026-04-16 08:54:25.962249 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-04-16 08:54:25.962259 | orchestrator | Thursday 16 April 2026 08:54:01 +0000 (0:00:02.079) 1:08:07.777 ********
2026-04-16 08:54:25.962268 | orchestrator | changed: [testbed-node-5]
2026-04-16 08:54:25.962278 | orchestrator |
2026-04-16 08:54:25.962306 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-04-16 08:54:25.962316 | orchestrator | Thursday 16 April 2026 08:54:02 +0000 (0:00:01.903) 1:08:09.680 ********
2026-04-16 08:54:25.962326 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:54:25.962337 | orchestrator |
2026-04-16 08:54:25.962347 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-04-16 08:54:25.962357 | orchestrator | Thursday 16 April 2026 08:54:05 +0000 (0:00:02.687) 1:08:12.368 ********
2026-04-16 08:54:25.962366 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:54:25.962376 | orchestrator |
2026-04-16 08:54:25.962385 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:54:25.962395 | orchestrator | Thursday 16 April 2026 08:54:07 +0000 (0:00:01.898) 1:08:14.266 ********
2026-04-16 08:54:25.962405 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-04-16 08:54:25.962415 | orchestrator |
2026-04-16 08:54:25.962425 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:54:25.962434 | orchestrator | Thursday 16 April 2026 08:54:08 +0000 (0:00:01.137) 1:08:15.403 ********
2026-04-16 08:54:25.962444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-04-16 08:54:25.962453 | orchestrator |
2026-04-16 08:54:25.962463 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:54:25.962473 | orchestrator | Thursday 16 April 2026 08:54:09 +0000 (0:00:01.095) 1:08:16.499 ********
2026-04-16 08:54:25.962482 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.962492 | orchestrator |
2026-04-16 08:54:25.962501 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:54:25.962511 | orchestrator | Thursday 16 April 2026 08:54:10 +0000 (0:00:01.139) 1:08:17.638 ********
2026-04-16 08:54:25.962530 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:54:25.962540 | orchestrator |
2026-04-16 08:54:25.962550 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:54:25.962559 | orchestrator | Thursday 16 April 2026 08:54:12 +0000 (0:00:01.544) 1:08:19.183 ********
2026-04-16 08:54:25.962569 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:54:25.962579 | orchestrator |
2026-04-16 08:54:25.962588 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:54:25.962598 | orchestrator | Thursday 16 April 2026 08:54:13 +0000 (0:00:01.520) 1:08:20.703 ********
2026-04-16 08:54:25.962607 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:54:25.962617 | orchestrator |
2026-04-16 08:54:25.962627 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:54:25.962636 | orchestrator | Thursday 16 April 2026 08:54:15 +0000 (0:00:01.545) 1:08:22.248 ********
2026-04-16 08:54:25.962646 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.962655 | orchestrator |
2026-04-16 08:54:25.962665 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:54:25.962675 | orchestrator | Thursday 16 April 2026 08:54:16 +0000 (0:00:01.096) 1:08:23.345 ********
2026-04-16 08:54:25.962684 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.962694 | orchestrator |
2026-04-16 08:54:25.962703 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:54:25.962713 | orchestrator | Thursday 16 April 2026 08:54:17 +0000 (0:00:01.137) 1:08:24.483 ********
2026-04-16 08:54:25.962723 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:54:25.962732 | orchestrator |
2026-04-16 08:54:25.962742 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:54:25.962752 | orchestrator | Thursday 16 April 2026 08:54:18 +0000 (0:00:01.131) 1:08:25.615 ********
2026-04-16 08:54:25.962761 |
orchestrator | ok: [testbed-node-5] 2026-04-16 08:54:25.962771 | orchestrator | 2026-04-16 08:54:25.962781 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-04-16 08:54:25.962790 | orchestrator | Thursday 16 April 2026 08:54:20 +0000 (0:00:01.538) 1:08:27.153 ******** 2026-04-16 08:54:25.962800 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:54:25.962809 | orchestrator | 2026-04-16 08:54:25.962819 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-04-16 08:54:25.962828 | orchestrator | Thursday 16 April 2026 08:54:21 +0000 (0:00:01.532) 1:08:28.685 ******** 2026-04-16 08:54:25.962838 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:54:25.962848 | orchestrator | 2026-04-16 08:54:25.962862 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-04-16 08:54:25.962872 | orchestrator | Thursday 16 April 2026 08:54:22 +0000 (0:00:00.777) 1:08:29.463 ******** 2026-04-16 08:54:25.962884 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:54:25.962900 | orchestrator | 2026-04-16 08:54:25.962916 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-04-16 08:54:25.962932 | orchestrator | Thursday 16 April 2026 08:54:23 +0000 (0:00:00.764) 1:08:30.227 ******** 2026-04-16 08:54:25.962948 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:54:25.962964 | orchestrator | 2026-04-16 08:54:25.962981 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-04-16 08:54:25.962997 | orchestrator | Thursday 16 April 2026 08:54:24 +0000 (0:00:00.777) 1:08:31.004 ******** 2026-04-16 08:54:25.963013 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:54:25.963029 | orchestrator | 2026-04-16 08:54:25.963044 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-04-16 08:54:25.963059 
| orchestrator | Thursday 16 April 2026 08:54:25 +0000 (0:00:00.787) 1:08:31.792 ******** 2026-04-16 08:54:25.963075 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:54:25.963090 | orchestrator | 2026-04-16 08:54:25.963106 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-04-16 08:54:25.963122 | orchestrator | Thursday 16 April 2026 08:54:25 +0000 (0:00:00.787) 1:08:32.579 ******** 2026-04-16 08:54:25.963150 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:54:25.963233 | orchestrator | 2026-04-16 08:54:25.963261 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-04-16 08:55:06.358417 | orchestrator | Thursday 16 April 2026 08:54:26 +0000 (0:00:00.748) 1:08:33.328 ******** 2026-04-16 08:55:06.358516 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358525 | orchestrator | 2026-04-16 08:55:06.358530 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-04-16 08:55:06.358534 | orchestrator | Thursday 16 April 2026 08:54:27 +0000 (0:00:00.816) 1:08:34.145 ******** 2026-04-16 08:55:06.358538 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358543 | orchestrator | 2026-04-16 08:55:06.358575 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-04-16 08:55:06.358580 | orchestrator | Thursday 16 April 2026 08:54:28 +0000 (0:00:00.758) 1:08:34.903 ******** 2026-04-16 08:55:06.358585 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.358590 | orchestrator | 2026-04-16 08:55:06.358594 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-04-16 08:55:06.358599 | orchestrator | Thursday 16 April 2026 08:54:28 +0000 (0:00:00.772) 1:08:35.676 ******** 2026-04-16 08:55:06.358603 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.358607 | orchestrator | 2026-04-16 08:55:06.358614 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-04-16 08:55:06.358619 | orchestrator | Thursday 16 April 2026 08:54:29 +0000 (0:00:00.839) 1:08:36.516 ******** 2026-04-16 08:55:06.358623 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358627 | orchestrator | 2026-04-16 08:55:06.358631 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-04-16 08:55:06.358635 | orchestrator | Thursday 16 April 2026 08:54:30 +0000 (0:00:00.775) 1:08:37.291 ******** 2026-04-16 08:55:06.358638 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358642 | orchestrator | 2026-04-16 08:55:06.358646 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-04-16 08:55:06.358650 | orchestrator | Thursday 16 April 2026 08:54:31 +0000 (0:00:00.782) 1:08:38.073 ******** 2026-04-16 08:55:06.358653 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358657 | orchestrator | 2026-04-16 08:55:06.358661 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-04-16 08:55:06.358665 | orchestrator | Thursday 16 April 2026 08:54:32 +0000 (0:00:00.790) 1:08:38.864 ******** 2026-04-16 08:55:06.358668 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358672 | orchestrator | 2026-04-16 08:55:06.358676 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-04-16 08:55:06.358680 | orchestrator | Thursday 16 April 2026 08:54:32 +0000 (0:00:00.783) 1:08:39.647 ******** 2026-04-16 08:55:06.358683 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358687 | orchestrator | 2026-04-16 08:55:06.358691 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-04-16 08:55:06.358695 | orchestrator | Thursday 16 April 2026 08:54:33 +0000 (0:00:00.741) 1:08:40.389 ******** 
2026-04-16 08:55:06.358698 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358702 | orchestrator | 2026-04-16 08:55:06.358706 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-04-16 08:55:06.358710 | orchestrator | Thursday 16 April 2026 08:54:34 +0000 (0:00:00.748) 1:08:41.138 ******** 2026-04-16 08:55:06.358713 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358717 | orchestrator | 2026-04-16 08:55:06.358721 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-04-16 08:55:06.358726 | orchestrator | Thursday 16 April 2026 08:54:35 +0000 (0:00:00.745) 1:08:41.883 ******** 2026-04-16 08:55:06.358729 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358733 | orchestrator | 2026-04-16 08:55:06.358737 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-04-16 08:55:06.358741 | orchestrator | Thursday 16 April 2026 08:54:35 +0000 (0:00:00.763) 1:08:42.646 ******** 2026-04-16 08:55:06.358769 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358774 | orchestrator | 2026-04-16 08:55:06.358778 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-04-16 08:55:06.358782 | orchestrator | Thursday 16 April 2026 08:54:36 +0000 (0:00:00.784) 1:08:43.430 ******** 2026-04-16 08:55:06.358785 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358789 | orchestrator | 2026-04-16 08:55:06.358793 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-04-16 08:55:06.358796 | orchestrator | Thursday 16 April 2026 08:54:37 +0000 (0:00:00.779) 1:08:44.210 ******** 2026-04-16 08:55:06.358800 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358804 | orchestrator | 2026-04-16 08:55:06.358808 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-04-16 08:55:06.358821 | orchestrator | Thursday 16 April 2026 08:54:38 +0000 (0:00:00.789) 1:08:45.000 ******** 2026-04-16 08:55:06.358825 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358828 | orchestrator | 2026-04-16 08:55:06.358832 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-04-16 08:55:06.358836 | orchestrator | Thursday 16 April 2026 08:54:39 +0000 (0:00:00.789) 1:08:45.790 ******** 2026-04-16 08:55:06.358839 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.358843 | orchestrator | 2026-04-16 08:55:06.358847 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-04-16 08:55:06.358851 | orchestrator | Thursday 16 April 2026 08:54:40 +0000 (0:00:01.608) 1:08:47.399 ******** 2026-04-16 08:55:06.358854 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.358858 | orchestrator | 2026-04-16 08:55:06.358862 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-04-16 08:55:06.358865 | orchestrator | Thursday 16 April 2026 08:54:42 +0000 (0:00:01.880) 1:08:49.279 ******** 2026-04-16 08:55:06.358869 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-04-16 08:55:06.358874 | orchestrator | 2026-04-16 08:55:06.358878 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-04-16 08:55:06.358882 | orchestrator | Thursday 16 April 2026 08:54:43 +0000 (0:00:01.096) 1:08:50.376 ******** 2026-04-16 08:55:06.358886 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358889 | orchestrator | 2026-04-16 08:55:06.358893 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-04-16 08:55:06.358908 | orchestrator | Thursday 16 April 2026 08:54:44 +0000 (0:00:01.149) 1:08:51.525 ******** 
2026-04-16 08:55:06.358912 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358916 | orchestrator | 2026-04-16 08:55:06.358923 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-04-16 08:55:06.358929 | orchestrator | Thursday 16 April 2026 08:54:45 +0000 (0:00:01.155) 1:08:52.681 ******** 2026-04-16 08:55:06.358935 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-04-16 08:55:06.358941 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-04-16 08:55:06.358947 | orchestrator | 2026-04-16 08:55:06.358952 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-04-16 08:55:06.358958 | orchestrator | Thursday 16 April 2026 08:54:47 +0000 (0:00:01.832) 1:08:54.514 ******** 2026-04-16 08:55:06.358964 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.358970 | orchestrator | 2026-04-16 08:55:06.358975 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-04-16 08:55:06.358981 | orchestrator | Thursday 16 April 2026 08:54:49 +0000 (0:00:01.441) 1:08:55.955 ******** 2026-04-16 08:55:06.358987 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.358992 | orchestrator | 2026-04-16 08:55:06.358998 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-04-16 08:55:06.359004 | orchestrator | Thursday 16 April 2026 08:54:50 +0000 (0:00:01.111) 1:08:57.067 ******** 2026-04-16 08:55:06.359010 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359023 | orchestrator | 2026-04-16 08:55:06.359030 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-04-16 08:55:06.359036 | orchestrator | Thursday 16 April 2026 08:54:51 +0000 (0:00:00.785) 1:08:57.853 ******** 2026-04-16 08:55:06.359042 | orchestrator | 
skipping: [testbed-node-5] 2026-04-16 08:55:06.359048 | orchestrator | 2026-04-16 08:55:06.359054 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-04-16 08:55:06.359060 | orchestrator | Thursday 16 April 2026 08:54:51 +0000 (0:00:00.763) 1:08:58.617 ******** 2026-04-16 08:55:06.359066 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5 2026-04-16 08:55:06.359072 | orchestrator | 2026-04-16 08:55:06.359078 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-04-16 08:55:06.359082 | orchestrator | Thursday 16 April 2026 08:54:52 +0000 (0:00:01.084) 1:08:59.702 ******** 2026-04-16 08:55:06.359085 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.359089 | orchestrator | 2026-04-16 08:55:06.359093 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-04-16 08:55:06.359096 | orchestrator | Thursday 16 April 2026 08:54:54 +0000 (0:00:01.703) 1:09:01.405 ******** 2026-04-16 08:55:06.359100 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-04-16 08:55:06.359104 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-04-16 08:55:06.359108 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-04-16 08:55:06.359112 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359116 | orchestrator | 2026-04-16 08:55:06.359119 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-04-16 08:55:06.359123 | orchestrator | Thursday 16 April 2026 08:54:55 +0000 (0:00:01.149) 1:09:02.555 ******** 2026-04-16 08:55:06.359127 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359131 | orchestrator | 2026-04-16 08:55:06.359135 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-04-16 08:55:06.359139 | orchestrator | Thursday 16 April 2026 08:54:56 +0000 (0:00:01.141) 1:09:03.696 ******** 2026-04-16 08:55:06.359143 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359147 | orchestrator | 2026-04-16 08:55:06.359150 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-04-16 08:55:06.359154 | orchestrator | Thursday 16 April 2026 08:54:58 +0000 (0:00:01.204) 1:09:04.901 ******** 2026-04-16 08:55:06.359158 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359161 | orchestrator | 2026-04-16 08:55:06.359165 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-04-16 08:55:06.359189 | orchestrator | Thursday 16 April 2026 08:54:59 +0000 (0:00:01.127) 1:09:06.029 ******** 2026-04-16 08:55:06.359196 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359202 | orchestrator | 2026-04-16 08:55:06.359208 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-04-16 08:55:06.359219 | orchestrator | Thursday 16 April 2026 08:55:00 +0000 (0:00:01.108) 1:09:07.138 ******** 2026-04-16 08:55:06.359225 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359232 | orchestrator | 2026-04-16 08:55:06.359236 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-04-16 08:55:06.359240 | orchestrator | Thursday 16 April 2026 08:55:01 +0000 (0:00:00.789) 1:09:07.928 ******** 2026-04-16 08:55:06.359243 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:06.359247 | orchestrator | 2026-04-16 08:55:06.359251 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-04-16 08:55:06.359255 | orchestrator | Thursday 16 April 2026 08:55:03 +0000 (0:00:02.187) 1:09:10.115 ******** 2026-04-16 08:55:06.359258 | orchestrator | ok: 
[testbed-node-5] 2026-04-16 08:55:06.359262 | orchestrator | 2026-04-16 08:55:06.359266 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-04-16 08:55:06.359269 | orchestrator | Thursday 16 April 2026 08:55:04 +0000 (0:00:00.755) 1:09:10.871 ******** 2026-04-16 08:55:06.359278 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 2026-04-16 08:55:06.359282 | orchestrator | 2026-04-16 08:55:06.359285 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-04-16 08:55:06.359289 | orchestrator | Thursday 16 April 2026 08:55:05 +0000 (0:00:01.094) 1:09:11.966 ******** 2026-04-16 08:55:06.359293 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:06.359296 | orchestrator | 2026-04-16 08:55:06.359300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-04-16 08:55:06.359309 | orchestrator | Thursday 16 April 2026 08:55:06 +0000 (0:00:01.138) 1:09:13.105 ******** 2026-04-16 08:55:47.253298 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253386 | orchestrator | 2026-04-16 08:55:47.253396 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-04-16 08:55:47.253405 | orchestrator | Thursday 16 April 2026 08:55:07 +0000 (0:00:01.141) 1:09:14.246 ******** 2026-04-16 08:55:47.253411 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253418 | orchestrator | 2026-04-16 08:55:47.253424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-04-16 08:55:47.253431 | orchestrator | Thursday 16 April 2026 08:55:08 +0000 (0:00:01.137) 1:09:15.383 ******** 2026-04-16 08:55:47.253437 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253444 | orchestrator | 2026-04-16 08:55:47.253450 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-04-16 08:55:47.253457 | orchestrator | Thursday 16 April 2026 08:55:09 +0000 (0:00:01.123) 1:09:16.507 ******** 2026-04-16 08:55:47.253463 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253469 | orchestrator | 2026-04-16 08:55:47.253475 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-04-16 08:55:47.253482 | orchestrator | Thursday 16 April 2026 08:55:10 +0000 (0:00:01.138) 1:09:17.645 ******** 2026-04-16 08:55:47.253488 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253494 | orchestrator | 2026-04-16 08:55:47.253500 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-04-16 08:55:47.253507 | orchestrator | Thursday 16 April 2026 08:55:12 +0000 (0:00:01.116) 1:09:18.762 ******** 2026-04-16 08:55:47.253513 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253519 | orchestrator | 2026-04-16 08:55:47.253525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-04-16 08:55:47.253531 | orchestrator | Thursday 16 April 2026 08:55:13 +0000 (0:00:01.159) 1:09:19.922 ******** 2026-04-16 08:55:47.253538 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253544 | orchestrator | 2026-04-16 08:55:47.253550 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-04-16 08:55:47.253557 | orchestrator | Thursday 16 April 2026 08:55:14 +0000 (0:00:01.119) 1:09:21.041 ******** 2026-04-16 08:55:47.253563 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:55:47.253570 | orchestrator | 2026-04-16 08:55:47.253577 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-04-16 08:55:47.253583 | orchestrator | Thursday 16 April 2026 08:55:15 +0000 (0:00:00.868) 1:09:21.909 ******** 2026-04-16 08:55:47.253590 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-04-16 08:55:47.253597 | orchestrator | 2026-04-16 08:55:47.253603 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-04-16 08:55:47.253609 | orchestrator | Thursday 16 April 2026 08:55:16 +0000 (0:00:01.264) 1:09:23.173 ******** 2026-04-16 08:55:47.253616 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-04-16 08:55:47.253622 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-04-16 08:55:47.253629 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-04-16 08:55:47.253635 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-04-16 08:55:47.253641 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-04-16 08:55:47.253666 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-04-16 08:55:47.253672 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-04-16 08:55:47.253679 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-04-16 08:55:47.253685 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-04-16 08:55:47.253691 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-04-16 08:55:47.253697 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-04-16 08:55:47.253703 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-04-16 08:55:47.253710 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-04-16 08:55:47.253716 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-04-16 08:55:47.253722 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-04-16 08:55:47.253728 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-04-16 08:55:47.253734 | orchestrator | 2026-04-16 08:55:47.253740 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-04-16 08:55:47.253757 | orchestrator | Thursday 16 April 2026 08:55:22 +0000 (0:00:06.366) 1:09:29.539 ******** 2026-04-16 08:55:47.253763 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-04-16 08:55:47.253770 | orchestrator | 2026-04-16 08:55:47.253776 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-04-16 08:55:47.253782 | orchestrator | Thursday 16 April 2026 08:55:23 +0000 (0:00:01.091) 1:09:30.631 ******** 2026-04-16 08:55:47.253789 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 08:55:47.253796 | orchestrator | 2026-04-16 08:55:47.253802 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-04-16 08:55:47.253809 | orchestrator | Thursday 16 April 2026 08:55:25 +0000 (0:00:01.501) 1:09:32.132 ******** 2026-04-16 08:55:47.253815 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-04-16 08:55:47.253821 | orchestrator | 2026-04-16 08:55:47.253828 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-04-16 08:55:47.253834 | orchestrator | Thursday 16 April 2026 08:55:26 +0000 (0:00:01.557) 1:09:33.690 ******** 2026-04-16 08:55:47.253840 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253846 | orchestrator | 2026-04-16 08:55:47.253853 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-04-16 08:55:47.253873 | orchestrator | Thursday 16 April 2026 08:55:27 +0000 (0:00:00.768) 1:09:34.459 ******** 2026-04-16 08:55:47.253881 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253888 | 
orchestrator | 2026-04-16 08:55:47.253895 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-04-16 08:55:47.253903 | orchestrator | Thursday 16 April 2026 08:55:28 +0000 (0:00:00.759) 1:09:35.218 ******** 2026-04-16 08:55:47.253910 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253917 | orchestrator | 2026-04-16 08:55:47.253924 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-04-16 08:55:47.253931 | orchestrator | Thursday 16 April 2026 08:55:29 +0000 (0:00:00.794) 1:09:36.013 ******** 2026-04-16 08:55:47.253938 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253945 | orchestrator | 2026-04-16 08:55:47.253952 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-04-16 08:55:47.253959 | orchestrator | Thursday 16 April 2026 08:55:30 +0000 (0:00:00.756) 1:09:36.769 ******** 2026-04-16 08:55:47.253966 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.253973 | orchestrator | 2026-04-16 08:55:47.253980 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-04-16 08:55:47.253988 | orchestrator | Thursday 16 April 2026 08:55:30 +0000 (0:00:00.781) 1:09:37.551 ******** 2026-04-16 08:55:47.254000 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.254007 | orchestrator | 2026-04-16 08:55:47.254055 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-04-16 08:55:47.254063 | orchestrator | Thursday 16 April 2026 08:55:31 +0000 (0:00:00.777) 1:09:38.329 ******** 2026-04-16 08:55:47.254069 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:55:47.254075 | orchestrator | 2026-04-16 08:55:47.254082 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
***
2026-04-16 08:55:47.254088 | orchestrator | Thursday 16 April 2026 08:55:32 +0000 (0:00:00.790) 1:09:39.119 ********
2026-04-16 08:55:47.254095 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254101 | orchestrator |
2026-04-16 08:55:47.254107 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-04-16 08:55:47.254113 | orchestrator | Thursday 16 April 2026 08:55:33 +0000 (0:00:00.789) 1:09:39.909 ********
2026-04-16 08:55:47.254120 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254126 | orchestrator |
2026-04-16 08:55:47.254132 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-04-16 08:55:47.254139 | orchestrator | Thursday 16 April 2026 08:55:33 +0000 (0:00:00.769) 1:09:40.679 ********
2026-04-16 08:55:47.254145 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254151 | orchestrator |
2026-04-16 08:55:47.254157 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-04-16 08:55:47.254164 | orchestrator | Thursday 16 April 2026 08:55:34 +0000 (0:00:00.741) 1:09:41.420 ********
2026-04-16 08:55:47.254170 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254190 | orchestrator |
2026-04-16 08:55:47.254197 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-04-16 08:55:47.254203 | orchestrator | Thursday 16 April 2026 08:55:35 +0000 (0:00:00.775) 1:09:42.196 ********
2026-04-16 08:55:47.254210 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-04-16 08:55:47.254216 | orchestrator |
2026-04-16 08:55:47.254222 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-04-16 08:55:47.254228 | orchestrator | Thursday 16 April 2026 08:55:39 +0000 (0:00:04.111) 1:09:46.307 ********
2026-04-16 08:55:47.254235 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:55:47.254241 | orchestrator |
2026-04-16 08:55:47.254248 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-04-16 08:55:47.254254 | orchestrator | Thursday 16 April 2026 08:55:40 +0000 (0:00:00.827) 1:09:47.134 ********
2026-04-16 08:55:47.254262 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-04-16 08:55:47.254276 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-04-16 08:55:47.254284 | orchestrator |
2026-04-16 08:55:47.254290 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-04-16 08:55:47.254297 | orchestrator | Thursday 16 April 2026 08:55:44 +0000 (0:00:04.539) 1:09:51.674 ********
2026-04-16 08:55:47.254303 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254309 | orchestrator |
2026-04-16 08:55:47.254316 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-04-16 08:55:47.254322 | orchestrator | Thursday 16 April 2026 08:55:45 +0000 (0:00:00.783) 1:09:52.457 ********
2026-04-16 08:55:47.254328 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254339 | orchestrator |
2026-04-16 08:55:47.254345 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-04-16 08:55:47.254352 | orchestrator | Thursday 16 April 2026 08:55:46 +0000 (0:00:00.745) 1:09:53.203 ********
2026-04-16 08:55:47.254358 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:55:47.254364 | orchestrator |
2026-04-16 08:55:47.254371 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-04-16 08:55:47.254382 | orchestrator | Thursday 16 April 2026 08:55:47 +0000 (0:00:00.797) 1:09:54.000 ********
2026-04-16 08:56:53.181305 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.181452 | orchestrator |
2026-04-16 08:56:53.181471 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-04-16 08:56:53.181484 | orchestrator | Thursday 16 April 2026 08:55:48 +0000 (0:00:00.788) 1:09:54.789 ********
2026-04-16 08:56:53.181496 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.181507 | orchestrator |
2026-04-16 08:56:53.181519 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-04-16 08:56:53.181531 | orchestrator | Thursday 16 April 2026 08:55:48 +0000 (0:00:00.790) 1:09:55.580 ********
2026-04-16 08:56:53.181542 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:56:53.181554 | orchestrator |
2026-04-16 08:56:53.181565 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-04-16 08:56:53.181576 | orchestrator | Thursday 16 April 2026 08:55:49 +0000 (0:00:00.871) 1:09:56.451 ********
2026-04-16 08:56:53.181587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:56:53.181599 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:56:53.181609 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:56:53.181620 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.181631 | orchestrator |
2026-04-16 08:56:53.181643 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-04-16 08:56:53.181654 | orchestrator | Thursday 16 April 2026 08:55:51 +0000 (0:00:01.411) 1:09:57.863 ********
2026-04-16 08:56:53.181665 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:56:53.181676 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:56:53.181687 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:56:53.181697 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.181709 | orchestrator |
2026-04-16 08:56:53.181719 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-04-16 08:56:53.181731 | orchestrator | Thursday 16 April 2026 08:55:52 +0000 (0:00:01.043) 1:09:58.906 ********
2026-04-16 08:56:53.181743 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-04-16 08:56:53.181755 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-04-16 08:56:53.181767 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-04-16 08:56:53.181780 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.181792 | orchestrator |
2026-04-16 08:56:53.181805 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-04-16 08:56:53.181817 | orchestrator | Thursday 16 April 2026 08:55:53 +0000 (0:00:01.030) 1:09:59.937 ********
2026-04-16 08:56:53.181829 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:56:53.181842 | orchestrator |
2026-04-16 08:56:53.181855 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-04-16 08:56:53.181868 | orchestrator | Thursday 16 April 2026 08:55:53 +0000 (0:00:00.775) 1:10:00.712 ********
2026-04-16 08:56:53.181880 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-04-16 08:56:53.181893 | orchestrator |
2026-04-16 08:56:53.181905 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-04-16 08:56:53.181917 | orchestrator | Thursday 16 April 2026 08:55:54 +0000 (0:00:00.998) 1:10:01.710 ********
2026-04-16 08:56:53.181931 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:56:53.181992 | orchestrator |
2026-04-16 08:56:53.182086 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-04-16 08:56:53.182110 | orchestrator | Thursday 16 April 2026 08:55:56 +0000 (0:00:01.327) 1:10:03.038 ********
2026-04-16 08:56:53.182130 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5
2026-04-16 08:56:53.182149 | orchestrator |
2026-04-16 08:56:53.182166 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-16 08:56:53.182183 | orchestrator | Thursday 16 April 2026 08:55:57 +0000 (0:00:01.099) 1:10:04.138 ********
2026-04-16 08:56:53.182225 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:56:53.182237 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:56:53.182248 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-16 08:56:53.182259 | orchestrator |
2026-04-16 08:56:53.182270 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:56:53.182296 | orchestrator | Thursday 16 April 2026 08:56:00 +0000 (0:00:03.152) 1:10:07.290 ********
2026-04-16 08:56:53.182308 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-16 08:56:53.182319 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-04-16 08:56:53.182330 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:56:53.182341 | orchestrator |
2026-04-16 08:56:53.182352 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-04-16 08:56:53.182363 | orchestrator | Thursday 16 April 2026 08:56:02 +0000 (0:00:01.976) 1:10:09.267 ********
2026-04-16 08:56:53.182374 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.182385 | orchestrator |
2026-04-16 08:56:53.182396 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-04-16 08:56:53.182406 | orchestrator | Thursday 16 April 2026 08:56:03 +0000 (0:00:00.786) 1:10:10.054 ********
2026-04-16 08:56:53.182417 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5
2026-04-16 08:56:53.182428 | orchestrator |
2026-04-16 08:56:53.182439 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-04-16 08:56:53.182450 | orchestrator | Thursday 16 April 2026 08:56:04 +0000 (0:00:01.152) 1:10:11.207 ********
2026-04-16 08:56:53.182461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:56:53.182474 | orchestrator |
2026-04-16 08:56:53.182485 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-04-16 08:56:53.182495 | orchestrator | Thursday 16 April 2026 08:56:06 +0000 (0:00:01.610) 1:10:12.818 ********
2026-04-16 08:56:53.182526 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:56:53.182539 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-04-16 08:56:53.182550 | orchestrator |
2026-04-16 08:56:53.182561 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-04-16 08:56:53.182572 | orchestrator | Thursday 16 April 2026 08:56:11 +0000 (0:00:05.072) 1:10:17.891 ********
2026-04-16 08:56:53.182583 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-04-16 08:56:53.182593 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-04-16 08:56:53.182604 | orchestrator |
2026-04-16 08:56:53.182615 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-04-16 08:56:53.182625 | orchestrator | Thursday 16 April 2026 08:56:14 +0000 (0:00:03.160) 1:10:21.051 ********
2026-04-16 08:56:53.182636 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-04-16 08:56:53.182647 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:56:53.182658 | orchestrator |
2026-04-16 08:56:53.182669 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-04-16 08:56:53.182680 | orchestrator | Thursday 16 April 2026 08:56:15 +0000 (0:00:01.663) 1:10:22.715 ********
2026-04-16 08:56:53.182703 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5
2026-04-16 08:56:53.182714 | orchestrator |
2026-04-16 08:56:53.182725 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-04-16 08:56:53.182736 | orchestrator | Thursday 16 April 2026 08:56:17 +0000 (0:00:01.122) 1:10:23.838 ********
2026-04-16 08:56:53.182747 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182759 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182803 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.182813 | orchestrator |
2026-04-16 08:56:53.182824 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-04-16 08:56:53.182835 | orchestrator | Thursday 16 April 2026 08:56:18 +0000 (0:00:01.549) 1:10:25.387 ********
2026-04-16 08:56:53.182846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182868 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.182907 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.182925 | orchestrator |
2026-04-16 08:56:53.182953 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-04-16 08:56:53.182983 | orchestrator | Thursday 16 April 2026 08:56:20 +0000 (0:00:01.888) 1:10:27.276 ********
2026-04-16 08:56:53.183002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.183020 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.183037 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.183054 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.183074 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-04-16 08:56:53.183092 | orchestrator |
2026-04-16 08:56:53.183111 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-04-16 08:56:53.183128 | orchestrator | Thursday 16 April 2026 08:56:52 +0000 (0:00:31.892) 1:10:59.169 ********
2026-04-16 08:56:53.183147 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:56:53.183165 | orchestrator |
2026-04-16 08:56:53.183182 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-04-16 08:56:53.183245 | orchestrator | Thursday 16 April 2026 08:56:53 +0000 (0:00:00.758) 1:10:59.927 ********
2026-04-16 08:57:43.053367 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.053558 | orchestrator |
2026-04-16 08:57:43.053579 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-04-16 08:57:43.053593 | orchestrator | Thursday 16 April 2026 08:56:53 +0000 (0:00:00.751) 1:11:00.679 ********
2026-04-16 08:57:43.053604 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5
2026-04-16 08:57:43.053617 | orchestrator |
2026-04-16 08:57:43.053628 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-04-16 08:57:43.053639 | orchestrator | Thursday 16 April 2026 08:56:55 +0000 (0:00:01.193) 1:11:01.872 ********
2026-04-16 08:57:43.053650 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5
2026-04-16 08:57:43.053661 | orchestrator |
2026-04-16 08:57:43.053672 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-04-16 08:57:43.053683 | orchestrator | Thursday 16 April 2026 08:56:56 +0000 (0:00:02.054) 1:11:03.016 ********
2026-04-16 08:57:43.053694 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.053706 | orchestrator |
2026-04-16 08:57:43.053717 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-04-16 08:57:43.053727 | orchestrator | Thursday 16 April 2026 08:56:58 +0000 (0:00:01.874) 1:11:05.070 ********
2026-04-16 08:57:43.053738 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.053749 | orchestrator |
2026-04-16 08:57:43.053760 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-04-16 08:57:43.053771 | orchestrator | Thursday 16 April 2026 08:57:00 +0000 (0:00:02.272) 1:11:06.944 ********
2026-04-16 08:57:43.053781 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.053793 | orchestrator |
2026-04-16 08:57:43.053806 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-04-16 08:57:43.053819 | orchestrator | Thursday 16 April 2026 08:57:02 +0000 (0:00:02.272) 1:11:09.217 ********
2026-04-16 08:57:43.053833 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-04-16 08:57:43.053847 | orchestrator |
2026-04-16 08:57:43.053859 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ********************************************
2026-04-16 08:57:43.053872 | orchestrator | skipping: no hosts matched
2026-04-16 08:57:43.053885 | orchestrator |
2026-04-16 08:57:43.053897 | orchestrator | PLAY [Upgrade ceph nfs node] ***************************************************
2026-04-16 08:57:43.053911 | orchestrator | skipping: no hosts matched
2026-04-16 08:57:43.053923 | orchestrator |
2026-04-16 08:57:43.053935 | orchestrator | PLAY [Upgrade ceph client node] ************************************************
2026-04-16 08:57:43.053948 | orchestrator | skipping: no hosts matched
2026-04-16 08:57:43.053960 | orchestrator |
2026-04-16 08:57:43.053972 | orchestrator | PLAY [Upgrade ceph-crash daemons] **********************************************
2026-04-16 08:57:43.053985 | orchestrator |
2026-04-16 08:57:43.053998 | orchestrator | TASK [Stop the ceph-crash service] *********************************************
2026-04-16 08:57:43.054011 | orchestrator | Thursday 16 April 2026 08:57:07 +0000 (0:00:04.952) 1:11:14.169 ********
2026-04-16 08:57:43.054092 | orchestrator | changed: [testbed-node-0]
2026-04-16 08:57:43.054105 | orchestrator | changed: [testbed-node-1]
2026-04-16 08:57:43.054127 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:57:43.054154 | orchestrator | changed: [testbed-node-3]
2026-04-16 08:57:43.054167 | orchestrator | changed: [testbed-node-4]
2026-04-16 08:57:43.054177 | orchestrator | changed: [testbed-node-5]
2026-04-16 08:57:43.054188 | orchestrator |
2026-04-16 08:57:43.054214 | orchestrator | TASK [Mask and disable the ceph-crash service] *********************************
2026-04-16 08:57:43.054226 | orchestrator | Thursday 16 April 2026 08:57:10 +0000 (0:00:02.771) 1:11:16.940 ********
2026-04-16 08:57:43.054237 | orchestrator | changed: [testbed-node-3]
2026-04-16 08:57:43.054339 | orchestrator | changed: [testbed-node-0]
2026-04-16 08:57:43.054382 | orchestrator | changed: [testbed-node-1]
2026-04-16 08:57:43.054394 | orchestrator | changed: [testbed-node-4]
2026-04-16 08:57:43.054405 | orchestrator | changed: [testbed-node-2]
2026-04-16 08:57:43.054416 | orchestrator | changed: [testbed-node-5]
2026-04-16 08:57:43.054427 | orchestrator |
2026-04-16 08:57:43.054438 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-04-16 08:57:43.054449 | orchestrator | Thursday 16 April 2026 08:57:13 +0000 (0:00:03.252) 1:11:20.193 ********
2026-04-16 08:57:43.054460 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.054471 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.054482 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.054508 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.054519 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.054530 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.054541 | orchestrator |
2026-04-16 08:57:43.054551 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-04-16 08:57:43.054562 | orchestrator | Thursday 16 April 2026 08:57:15 +0000 (0:00:02.105) 1:11:22.298 ********
2026-04-16 08:57:43.054573 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.054583 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.054594 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.054605 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.054618 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.054636 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.054654 | orchestrator |
2026-04-16 08:57:43.054673 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-04-16 08:57:43.054692 | orchestrator | Thursday 16 April 2026 08:57:17 +0000 (0:00:02.076) 1:11:24.375 ********
2026-04-16 08:57:43.054711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 08:57:43.054731 | orchestrator |
2026-04-16 08:57:43.054749 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-04-16 08:57:43.054768 | orchestrator | Thursday 16 April 2026 08:57:19 +0000 (0:00:02.115) 1:11:26.491 ********
2026-04-16 08:57:43.054787 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 08:57:43.054806 | orchestrator |
2026-04-16 08:57:43.054852 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-04-16 08:57:43.054865 | orchestrator | Thursday 16 April 2026 08:57:21 +0000 (0:00:02.122) 1:11:28.613 ********
2026-04-16 08:57:43.054875 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.054894 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.054912 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.054931 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.054950 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.054964 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.054975 | orchestrator |
2026-04-16 08:57:43.054986 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-04-16 08:57:43.054997 | orchestrator | Thursday 16 April 2026 08:57:23 +0000 (0:00:01.975) 1:11:30.589 ********
2026-04-16 08:57:43.055007 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055018 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055029 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055040 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.055050 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.055061 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.055071 | orchestrator |
2026-04-16 08:57:43.055082 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-04-16 08:57:43.055092 | orchestrator | Thursday 16 April 2026 08:57:26 +0000 (0:00:02.450) 1:11:33.040 ********
2026-04-16 08:57:43.055103 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055114 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055125 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055148 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.055159 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.055169 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.055180 | orchestrator |
2026-04-16 08:57:43.055191 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-04-16 08:57:43.055233 | orchestrator | Thursday 16 April 2026 08:57:28 +0000 (0:00:02.078) 1:11:35.118 ********
2026-04-16 08:57:43.055246 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055256 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055267 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055285 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.055303 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.055323 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.055343 | orchestrator |
2026-04-16 08:57:43.055364 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-04-16 08:57:43.055383 | orchestrator | Thursday 16 April 2026 08:57:30 +0000 (0:00:02.199) 1:11:37.318 ********
2026-04-16 08:57:43.055398 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.055409 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.055420 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.055430 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.055441 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.055452 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.055462 | orchestrator |
2026-04-16 08:57:43.055473 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-04-16 08:57:43.055484 | orchestrator | Thursday 16 April 2026 08:57:32 +0000 (0:00:01.988) 1:11:39.307 ********
2026-04-16 08:57:43.055495 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055506 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055516 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055527 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.055538 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.055548 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.055559 | orchestrator |
2026-04-16 08:57:43.055570 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-04-16 08:57:43.055580 | orchestrator | Thursday 16 April 2026 08:57:34 +0000 (0:00:01.851) 1:11:41.159 ********
2026-04-16 08:57:43.055591 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055602 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055613 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055624 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.055634 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.055645 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.055656 | orchestrator |
2026-04-16 08:57:43.055666 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-04-16 08:57:43.055677 | orchestrator | Thursday 16 April 2026 08:57:36 +0000 (0:00:01.796) 1:11:42.955 ********
2026-04-16 08:57:43.055688 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.055699 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.055709 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.055720 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.055731 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.055750 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.055761 | orchestrator |
2026-04-16 08:57:43.055772 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-04-16 08:57:43.055783 | orchestrator | Thursday 16 April 2026 08:57:38 +0000 (0:00:02.072) 1:11:45.028 ********
2026-04-16 08:57:43.055793 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.055804 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.055815 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.055825 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:57:43.055836 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:57:43.055846 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:57:43.055857 | orchestrator |
2026-04-16 08:57:43.055868 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-04-16 08:57:43.055889 | orchestrator | Thursday 16 April 2026 08:57:40 +0000 (0:00:02.198) 1:11:47.227 ********
2026-04-16 08:57:43.055900 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:57:43.055911 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:57:43.055922 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:57:43.055932 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.055945 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.055963 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.055983 | orchestrator |
2026-04-16 08:57:43.055994 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-04-16 08:57:43.056005 | orchestrator | Thursday 16 April 2026 08:57:42 +0000 (0:00:01.643) 1:11:48.871 ********
2026-04-16 08:57:43.056016 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:57:43.056027 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:57:43.056038 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:57:43.056049 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:57:43.056059 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:57:43.056070 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:57:43.056081 | orchestrator |
2026-04-16 08:57:43.056100 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-04-16 08:58:39.095690 | orchestrator | Thursday 16 April 2026 08:57:44 +0000 (0:00:01.915) 1:11:50.786 ********
2026-04-16 08:58:39.095823 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:58:39.095836 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:58:39.095845 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:58:39.095854 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.095864 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.095871 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.095880 | orchestrator |
2026-04-16 08:58:39.095889 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-04-16 08:58:39.095898 | orchestrator | Thursday 16 April 2026 08:57:46 +0000 (0:00:02.039) 1:11:52.825 ********
2026-04-16 08:58:39.095906 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:58:39.095914 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:58:39.095922 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:58:39.095930 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.095938 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.095946 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.095954 | orchestrator |
2026-04-16 08:58:39.095962 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-04-16 08:58:39.095970 | orchestrator | Thursday 16 April 2026 08:57:48 +0000 (0:00:01.950) 1:11:54.776 ********
2026-04-16 08:58:39.095978 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:58:39.095986 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:58:39.095994 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:58:39.096002 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.096010 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.096018 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.096026 | orchestrator |
2026-04-16 08:58:39.096034 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-04-16 08:58:39.096042 | orchestrator | Thursday 16 April 2026 08:57:49 +0000 (0:00:01.907) 1:11:56.684 ********
2026-04-16 08:58:39.096050 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:58:39.096058 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:58:39.096066 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:58:39.096074 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:58:39.096082 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:58:39.096089 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:58:39.096097 | orchestrator |
2026-04-16 08:58:39.096105 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-04-16 08:58:39.096113 | orchestrator | Thursday 16 April 2026 08:57:51 +0000 (0:00:01.891) 1:11:58.576 ********
2026-04-16 08:58:39.096121 | orchestrator | skipping: [testbed-node-0]
2026-04-16 08:58:39.096129 | orchestrator | skipping: [testbed-node-1]
2026-04-16 08:58:39.096178 | orchestrator | skipping: [testbed-node-2]
2026-04-16 08:58:39.096188 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:58:39.096196 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:58:39.096205 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:58:39.096234 | orchestrator |
2026-04-16 08:58:39.096244 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-04-16 08:58:39.096253 | orchestrator | Thursday 16 April 2026 08:57:53 +0000 (0:00:01.881) 1:12:00.457 ********
2026-04-16 08:58:39.096263 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096272 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:58:39.096281 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:58:39.096291 | orchestrator | skipping: [testbed-node-3]
2026-04-16 08:58:39.096300 | orchestrator | skipping: [testbed-node-4]
2026-04-16 08:58:39.096309 | orchestrator | skipping: [testbed-node-5]
2026-04-16 08:58:39.096318 | orchestrator |
2026-04-16 08:58:39.096327 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-04-16 08:58:39.096336 | orchestrator | Thursday 16 April 2026 08:57:55 +0000 (0:00:01.864) 1:12:02.321 ********
2026-04-16 08:58:39.096346 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096355 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:58:39.096364 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:58:39.096373 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.096381 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.096390 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.096399 | orchestrator |
2026-04-16 08:58:39.096408 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-04-16 08:58:39.096417 | orchestrator | Thursday 16 April 2026 08:57:57 +0000 (0:00:02.276) 1:12:04.598 ********
2026-04-16 08:58:39.096427 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096436 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:58:39.096445 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:58:39.096454 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.096480 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.096489 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.096499 | orchestrator |
2026-04-16 08:58:39.096508 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-04-16 08:58:39.096517 | orchestrator | Thursday 16 April 2026 08:57:59 +0000 (0:00:02.135) 1:12:06.734 ********
2026-04-16 08:58:39.096525 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096533 | orchestrator |
2026-04-16 08:58:39.096541 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-04-16 08:58:39.096549 | orchestrator | Thursday 16 April 2026 08:58:03 +0000 (0:00:03.681) 1:12:10.416 ********
2026-04-16 08:58:39.096557 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096564 | orchestrator |
2026-04-16 08:58:39.096572 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-04-16 08:58:39.096580 | orchestrator | Thursday 16 April 2026 08:58:06 +0000 (0:00:03.209) 1:12:13.626 ********
2026-04-16 08:58:39.096588 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096596 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:58:39.096603 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:58:39.096611 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.096619 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.096626 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.096634 | orchestrator |
2026-04-16 08:58:39.096642 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-04-16 08:58:39.096650 | orchestrator | Thursday 16 April 2026 08:58:09 +0000 (0:00:02.550) 1:12:16.176 ********
2026-04-16 08:58:39.096658 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:58:39.096666 | orchestrator | ok: [testbed-node-1]
2026-04-16 08:58:39.096673 | orchestrator | ok: [testbed-node-2]
2026-04-16 08:58:39.096681 | orchestrator | ok: [testbed-node-3]
2026-04-16 08:58:39.096689 | orchestrator | ok: [testbed-node-4]
2026-04-16 08:58:39.096697 | orchestrator | ok: [testbed-node-5]
2026-04-16 08:58:39.096704 | orchestrator |
2026-04-16 08:58:39.096712 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-04-16 08:58:39.096744 | orchestrator
| Thursday 16 April 2026 08:58:11 +0000 (0:00:02.420) 1:12:18.597 ******** 2026-04-16 08:58:39.096754 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 08:58:39.096764 | orchestrator | 2026-04-16 08:58:39.096772 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-04-16 08:58:39.096780 | orchestrator | Thursday 16 April 2026 08:58:14 +0000 (0:00:02.380) 1:12:20.978 ******** 2026-04-16 08:58:39.096788 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:58:39.096796 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:58:39.096804 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:58:39.096811 | orchestrator | ok: [testbed-node-3] 2026-04-16 08:58:39.096819 | orchestrator | ok: [testbed-node-4] 2026-04-16 08:58:39.096827 | orchestrator | ok: [testbed-node-5] 2026-04-16 08:58:39.096835 | orchestrator | 2026-04-16 08:58:39.096843 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-04-16 08:58:39.096850 | orchestrator | Thursday 16 April 2026 08:58:16 +0000 (0:00:02.572) 1:12:23.550 ******** 2026-04-16 08:58:39.096858 | orchestrator | changed: [testbed-node-3] 2026-04-16 08:58:39.096866 | orchestrator | changed: [testbed-node-4] 2026-04-16 08:58:39.096874 | orchestrator | changed: [testbed-node-0] 2026-04-16 08:58:39.096882 | orchestrator | changed: [testbed-node-2] 2026-04-16 08:58:39.096890 | orchestrator | changed: [testbed-node-5] 2026-04-16 08:58:39.096898 | orchestrator | changed: [testbed-node-1] 2026-04-16 08:58:39.096905 | orchestrator | 2026-04-16 08:58:39.096913 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-04-16 08:58:39.096921 | orchestrator | 2026-04-16 08:58:39.096929 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-04-16 08:58:39.096937 | orchestrator | Thursday 16 April 2026 08:58:21 +0000 (0:00:05.123) 1:12:28.674 ******** 2026-04-16 08:58:39.096945 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:58:39.096953 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:58:39.096961 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:58:39.096969 | orchestrator | 2026-04-16 08:58:39.096977 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:58:39.096985 | orchestrator | Thursday 16 April 2026 08:58:23 +0000 (0:00:01.671) 1:12:30.345 ******** 2026-04-16 08:58:39.096993 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:58:39.097001 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:58:39.097008 | orchestrator | ok: [testbed-node-2] 2026-04-16 08:58:39.097016 | orchestrator | 2026-04-16 08:58:39.097024 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-16 08:58:39.097033 | orchestrator | Thursday 16 April 2026 08:58:24 +0000 (0:00:01.391) 1:12:31.737 ******** 2026-04-16 08:58:39.097041 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:58:39.097049 | orchestrator | 2026-04-16 08:58:39.097057 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-04-16 08:58:39.097065 | orchestrator | Thursday 16 April 2026 08:58:27 +0000 (0:00:02.311) 1:12:34.048 ******** 2026-04-16 08:58:39.097073 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097081 | orchestrator | 2026-04-16 08:58:39.097089 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-04-16 08:58:39.097097 | orchestrator | 2026-04-16 08:58:39.097105 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-04-16 08:58:39.097112 | orchestrator | Thursday 16 April 2026 08:58:29 +0000 (0:00:02.135) 1:12:36.184 ******** 2026-04-16 
08:58:39.097120 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097128 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:58:39.097136 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:58:39.097144 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:58:39.097152 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:58:39.097159 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:58:39.097172 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:58:39.097180 | orchestrator | 2026-04-16 08:58:39.097188 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:58:39.097196 | orchestrator | Thursday 16 April 2026 08:58:31 +0000 (0:00:01.993) 1:12:38.177 ******** 2026-04-16 08:58:39.097204 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097228 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:58:39.097237 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:58:39.097244 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:58:39.097252 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:58:39.097265 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:58:39.097273 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:58:39.097281 | orchestrator | 2026-04-16 08:58:39.097289 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-04-16 08:58:39.097297 | orchestrator | Thursday 16 April 2026 08:58:33 +0000 (0:00:02.296) 1:12:40.474 ******** 2026-04-16 08:58:39.097305 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097312 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:58:39.097320 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:58:39.097332 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:58:39.097346 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:58:39.097363 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
08:58:39.097380 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:58:39.097397 | orchestrator | 2026-04-16 08:58:39.097409 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-04-16 08:58:39.097422 | orchestrator | Thursday 16 April 2026 08:58:36 +0000 (0:00:02.312) 1:12:42.786 ******** 2026-04-16 08:58:39.097434 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097447 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:58:39.097460 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:58:39.097472 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:58:39.097484 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:58:39.097498 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:58:39.097511 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:58:39.097525 | orchestrator | 2026-04-16 08:58:39.097539 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-04-16 08:58:39.097553 | orchestrator | Thursday 16 April 2026 08:58:38 +0000 (0:00:02.606) 1:12:45.393 ******** 2026-04-16 08:58:39.097562 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:58:39.097570 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:58:39.097577 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:58:39.097593 | orchestrator | skipping: [testbed-node-3] 2026-04-16 08:59:24.445576 | orchestrator | skipping: [testbed-node-4] 2026-04-16 08:59:24.445677 | orchestrator | skipping: [testbed-node-5] 2026-04-16 08:59:24.445686 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445694 | orchestrator | 2026-04-16 08:59:24.445702 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-04-16 08:59:24.445711 | orchestrator | 2026-04-16 08:59:24.445718 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-04-16 08:59:24.445725 | 
orchestrator | Thursday 16 April 2026 08:58:41 +0000 (0:00:02.832) 1:12:48.225 ******** 2026-04-16 08:59:24.445733 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-04-16 08:59:24.445741 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-04-16 08:59:24.445748 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-04-16 08:59:24.445754 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445761 | orchestrator | 2026-04-16 08:59:24.445768 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-16 08:59:24.445774 | orchestrator | Thursday 16 April 2026 08:58:42 +0000 (0:00:01.134) 1:12:49.360 ******** 2026-04-16 08:59:24.445781 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445787 | orchestrator | 2026-04-16 08:59:24.445794 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-16 08:59:24.445822 | orchestrator | Thursday 16 April 2026 08:58:43 +0000 (0:00:01.086) 1:12:50.447 ******** 2026-04-16 08:59:24.445830 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445836 | orchestrator | 2026-04-16 08:59:24.445843 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-16 08:59:24.445849 | orchestrator | Thursday 16 April 2026 08:58:44 +0000 (0:00:01.102) 1:12:51.549 ******** 2026-04-16 08:59:24.445856 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445862 | orchestrator | 2026-04-16 08:59:24.445869 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-16 08:59:24.445875 | orchestrator | Thursday 16 April 2026 08:58:45 +0000 (0:00:01.128) 1:12:52.678 ******** 2026-04-16 08:59:24.445882 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445888 | orchestrator | 2026-04-16 08:59:24.445895 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-04-16 08:59:24.445901 | orchestrator | Thursday 16 April 2026 08:58:47 +0000 (0:00:01.100) 1:12:53.778 ******** 2026-04-16 08:59:24.445908 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-04-16 08:59:24.445914 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-04-16 08:59:24.445921 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445927 | orchestrator | 2026-04-16 08:59:24.445934 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-04-16 08:59:24.445940 | orchestrator | Thursday 16 April 2026 08:58:48 +0000 (0:00:01.180) 1:12:54.958 ******** 2026-04-16 08:59:24.445947 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445953 | orchestrator | 2026-04-16 08:59:24.445960 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-04-16 08:59:24.445967 | orchestrator | Thursday 16 April 2026 08:58:49 +0000 (0:00:01.114) 1:12:56.073 ******** 2026-04-16 08:59:24.445973 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.445980 | orchestrator | 2026-04-16 08:59:24.445986 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-04-16 08:59:24.445993 | orchestrator | Thursday 16 April 2026 08:58:50 +0000 (0:00:01.114) 1:12:57.188 ******** 2026-04-16 08:59:24.445999 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446006 | orchestrator | 2026-04-16 08:59:24.446012 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-04-16 08:59:24.446067 | orchestrator | Thursday 16 April 2026 08:58:51 +0000 (0:00:01.100) 1:12:58.289 ******** 2026-04-16 08:59:24.446074 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-04-16 08:59:24.446081 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-04-16 08:59:24.446087 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446094 | orchestrator | 2026-04-16 08:59:24.446100 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-04-16 08:59:24.446106 | orchestrator | Thursday 16 April 2026 08:58:52 +0000 (0:00:01.123) 1:12:59.412 ******** 2026-04-16 08:59:24.446125 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446132 | orchestrator | 2026-04-16 08:59:24.446140 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-04-16 08:59:24.446147 | orchestrator | Thursday 16 April 2026 08:58:53 +0000 (0:00:01.113) 1:13:00.526 ******** 2026-04-16 08:59:24.446153 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446159 | orchestrator | 2026-04-16 08:59:24.446166 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-04-16 08:59:24.446172 | orchestrator | Thursday 16 April 2026 08:58:54 +0000 (0:00:01.105) 1:13:01.631 ******** 2026-04-16 08:59:24.446177 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446183 | orchestrator | 2026-04-16 08:59:24.446189 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-04-16 08:59:24.446195 | orchestrator | Thursday 16 April 2026 08:58:56 +0000 (0:00:01.146) 1:13:02.778 ******** 2026-04-16 08:59:24.446202 | orchestrator | skipping: [testbed-manager] 2026-04-16 08:59:24.446214 | orchestrator | 2026-04-16 08:59:24.446219 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-04-16 08:59:24.446288 | orchestrator | 2026-04-16 08:59:24.446296 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-04-16 08:59:24.446303 | orchestrator | Thursday 16 April 2026 08:58:57 +0000 (0:00:01.895) 1:13:04.674 ******** 2026-04-16 
08:59:24.446310 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446318 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446325 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446332 | orchestrator | 2026-04-16 08:59:24.446339 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-04-16 08:59:24.446346 | orchestrator | Thursday 16 April 2026 08:58:59 +0000 (0:00:01.358) 1:13:06.032 ******** 2026-04-16 08:59:24.446353 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446360 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446384 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446392 | orchestrator | 2026-04-16 08:59:24.446399 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-04-16 08:59:24.446406 | orchestrator | Thursday 16 April 2026 08:59:00 +0000 (0:00:01.417) 1:13:07.450 ******** 2026-04-16 08:59:24.446413 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446420 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446427 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446435 | orchestrator | 2026-04-16 08:59:24.446442 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-04-16 08:59:24.446449 | orchestrator | Thursday 16 April 2026 08:59:02 +0000 (0:00:01.366) 1:13:08.816 ******** 2026-04-16 08:59:24.446456 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446463 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446470 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446477 | orchestrator | 2026-04-16 08:59:24.446484 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-04-16 08:59:24.446492 | orchestrator | Thursday 16 April 2026 08:59:03 +0000 (0:00:01.358) 1:13:10.175 ******** 2026-04-16 
08:59:24.446499 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446506 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446513 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446520 | orchestrator | 2026-04-16 08:59:24.446526 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-04-16 08:59:24.446533 | orchestrator | Thursday 16 April 2026 08:59:04 +0000 (0:00:01.299) 1:13:11.475 ******** 2026-04-16 08:59:24.446539 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446546 | orchestrator | skipping: [testbed-node-1] 2026-04-16 08:59:24.446552 | orchestrator | skipping: [testbed-node-2] 2026-04-16 08:59:24.446559 | orchestrator | 2026-04-16 08:59:24.446566 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-04-16 08:59:24.446572 | orchestrator | Thursday 16 April 2026 08:59:06 +0000 (0:00:01.605) 1:13:13.080 ******** 2026-04-16 08:59:24.446578 | orchestrator | skipping: [testbed-node-0] 2026-04-16 08:59:24.446585 | orchestrator | 2026-04-16 08:59:24.446591 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-04-16 08:59:24.446598 | orchestrator | 2026-04-16 08:59:24.446604 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-04-16 08:59:24.446611 | orchestrator | Thursday 16 April 2026 08:59:07 +0000 (0:00:01.484) 1:13:14.564 ******** 2026-04-16 08:59:24.446617 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446624 | orchestrator | 2026-04-16 08:59:24.446630 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-04-16 08:59:24.446637 | orchestrator | Thursday 16 April 2026 08:59:09 +0000 (0:00:01.431) 1:13:15.996 ******** 2026-04-16 08:59:24.446643 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446650 | orchestrator | 2026-04-16 08:59:24.446656 
| orchestrator | TASK [Set_fact ceph_cmd] ******************************************************* 2026-04-16 08:59:24.446663 | orchestrator | Thursday 16 April 2026 08:59:10 +0000 (0:00:01.117) 1:13:17.113 ******** 2026-04-16 08:59:24.446677 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446684 | orchestrator | 2026-04-16 08:59:24.446690 | orchestrator | TASK [Backup the crushmap] ***************************************************** 2026-04-16 08:59:24.446697 | orchestrator | Thursday 16 April 2026 08:59:11 +0000 (0:00:01.100) 1:13:18.213 ******** 2026-04-16 08:59:24.446703 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446710 | orchestrator | 2026-04-16 08:59:24.446716 | orchestrator | TASK [Switch crush buckets to straw2] ****************************************** 2026-04-16 08:59:24.446723 | orchestrator | Thursday 16 April 2026 08:59:14 +0000 (0:00:02.924) 1:13:21.138 ******** 2026-04-16 08:59:24.446729 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446736 | orchestrator | 2026-04-16 08:59:24.446742 | orchestrator | TASK [Remove crushmap backup] ************************************************** 2026-04-16 08:59:24.446749 | orchestrator | Thursday 16 April 2026 08:59:17 +0000 (0:00:03.039) 1:13:24.178 ******** 2026-04-16 08:59:24.446755 | orchestrator | changed: [testbed-node-0] 2026-04-16 08:59:24.446762 | orchestrator | 2026-04-16 08:59:24.446768 | orchestrator | PLAY [Show ceph status] ******************************************************** 2026-04-16 08:59:24.446775 | orchestrator | 2026-04-16 08:59:24.446781 | orchestrator | TASK [Set_fact container_exec_cmd_status] ************************************** 2026-04-16 08:59:24.446788 | orchestrator | Thursday 16 April 2026 08:59:19 +0000 (0:00:02.115) 1:13:26.293 ******** 2026-04-16 08:59:24.446798 | orchestrator | ok: [testbed-node-0] 2026-04-16 08:59:24.446805 | orchestrator | ok: [testbed-node-1] 2026-04-16 08:59:24.446812 | orchestrator | ok: [testbed-node-2] 2026-04-16 
08:59:24.446818 | orchestrator |
2026-04-16 08:59:24.446825 | orchestrator | TASK [Show ceph status] ********************************************************
2026-04-16 08:59:24.446831 | orchestrator | Thursday 16 April 2026 08:59:20 +0000 (0:00:01.397) 1:13:27.691 ********
2026-04-16 08:59:24.446838 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:59:24.446844 | orchestrator |
2026-04-16 08:59:24.446851 | orchestrator | TASK [Show all daemons version] ************************************************
2026-04-16 08:59:24.446857 | orchestrator | Thursday 16 April 2026 08:59:23 +0000 (0:00:02.268) 1:13:29.959 ********
2026-04-16 08:59:24.446864 | orchestrator | ok: [testbed-node-0]
2026-04-16 08:59:24.446870 | orchestrator |
2026-04-16 08:59:24.446877 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 08:59:24.446884 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 08:59:24.446892 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-04-16 08:59:24.446899 | orchestrator | testbed-node-0 : ok=248  changed=19  unreachable=0 failed=0 skipped=369  rescued=0 ignored=0
2026-04-16 08:59:24.446906 | orchestrator | testbed-node-1 : ok=191  changed=14  unreachable=0 failed=0 skipped=343  rescued=0 ignored=0
2026-04-16 08:59:24.446916 | orchestrator | testbed-node-2 : ok=196  changed=14  unreachable=0 failed=0 skipped=344  rescued=0 ignored=0
2026-04-16 08:59:27.186657 | orchestrator | testbed-node-3 : ok=316  changed=21  unreachable=0 failed=0 skipped=355  rescued=0 ignored=0
2026-04-16 08:59:27.186786 | orchestrator | testbed-node-4 : ok=302  changed=17  unreachable=0 failed=0 skipped=338  rescued=0 ignored=0
2026-04-16 08:59:27.186803 | orchestrator | testbed-node-5 : ok=309  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-04-16 08:59:27.186815 | orchestrator |
2026-04-16 08:59:27.186826 | orchestrator |
2026-04-16 08:59:27.186837 | orchestrator |
2026-04-16 08:59:27.186849 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 08:59:27.186893 | orchestrator | Thursday 16 April 2026 08:59:26 +0000 (0:00:03.417) 1:13:33.377 ********
2026-04-16 08:59:27.186904 | orchestrator | ===============================================================================
2026-04-16 08:59:27.186915 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 76.47s
2026-04-16 08:59:27.186926 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 75.95s
2026-04-16 08:59:27.186937 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.13s
2026-04-16 08:59:27.186948 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.89s
2026-04-16 08:59:27.186959 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.76s
2026-04-16 08:59:27.186969 | orchestrator | Gather and delegate facts ---------------------------------------------- 31.23s
2026-04-16 08:59:27.186980 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 29.46s
2026-04-16 08:59:27.186991 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 27.24s
2026-04-16 08:59:27.187001 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 23.17s
2026-04-16 08:59:27.187012 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.99s
2026-04-16 08:59:27.187022 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.91s
2026-04-16 08:59:27.187033 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 18.05s
2026-04-16 08:59:27.187044 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.31s
2026-04-16 08:59:27.187055 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.08s
2026-04-16 08:59:27.187065 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.20s
2026-04-16 08:59:27.187076 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 13.08s
2026-04-16 08:59:27.187086 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.70s
2026-04-16 08:59:27.187097 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.98s
2026-04-16 08:59:27.187108 | orchestrator | Stop ceph mon ---------------------------------------------------------- 11.78s
2026-04-16 08:59:27.187119 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.44s
2026-04-16 08:59:27.364400 | orchestrator | + osism apply cephclient
2026-04-16 08:59:28.654850 | orchestrator | 2026-04-16 08:59:28 | INFO  | Prepare task for execution of cephclient.
2026-04-16 08:59:28.718527 | orchestrator | 2026-04-16 08:59:28 | INFO  | Task b6628fc3-a344-4364-8aa3-5355886bce7c (cephclient) was prepared for execution.
2026-04-16 08:59:28.718638 | orchestrator | 2026-04-16 08:59:28 | INFO  | It takes a moment until task b6628fc3-a344-4364-8aa3-5355886bce7c (cephclient) has been started and output is visible here.
2026-04-16 08:59:55.401739 | orchestrator | 2026-04-16 08:59:55.401859 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-04-16 08:59:55.401875 | orchestrator | 2026-04-16 08:59:55.401888 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-04-16 08:59:55.401900 | orchestrator | Thursday 16 April 2026 08:59:34 +0000 (0:00:01.868) 0:00:01.868 ******** 2026-04-16 08:59:55.401912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-04-16 08:59:55.401925 | orchestrator | 2026-04-16 08:59:55.401936 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-04-16 08:59:55.401947 | orchestrator | Thursday 16 April 2026 08:59:36 +0000 (0:00:01.812) 0:00:03.680 ******** 2026-04-16 08:59:55.401960 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-04-16 08:59:55.401971 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data) 2026-04-16 08:59:55.401984 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-04-16 08:59:55.402084 | orchestrator | 2026-04-16 08:59:55.402098 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-04-16 08:59:55.402109 | orchestrator | Thursday 16 April 2026 08:59:38 +0000 (0:00:02.508) 0:00:06.189 ******** 2026-04-16 08:59:55.402121 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-04-16 08:59:55.402131 | orchestrator | 2026-04-16 08:59:55.402142 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-04-16 08:59:55.402153 | orchestrator | Thursday 16 April 2026 08:59:40 +0000 (0:00:01.974) 0:00:08.164 ******** 2026-04-16 08:59:55.402164 | orchestrator | ok: 
[testbed-manager] 2026-04-16 08:59:55.402175 | orchestrator | 2026-04-16 08:59:55.402186 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-04-16 08:59:55.402196 | orchestrator | Thursday 16 April 2026 08:59:42 +0000 (0:00:01.798) 0:00:09.962 ******** 2026-04-16 08:59:55.402207 | orchestrator | ok: [testbed-manager] 2026-04-16 08:59:55.402218 | orchestrator | 2026-04-16 08:59:55.402228 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-04-16 08:59:55.402270 | orchestrator | Thursday 16 April 2026 08:59:44 +0000 (0:00:01.786) 0:00:11.749 ******** 2026-04-16 08:59:55.402283 | orchestrator | ok: [testbed-manager] 2026-04-16 08:59:55.402295 | orchestrator | 2026-04-16 08:59:55.402307 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-04-16 08:59:55.402320 | orchestrator | Thursday 16 April 2026 08:59:46 +0000 (0:00:02.187) 0:00:13.937 ******** 2026-04-16 08:59:55.402333 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-04-16 08:59:55.402346 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool) 2026-04-16 08:59:55.402359 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-04-16 08:59:55.402372 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-04-16 08:59:55.402384 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-04-16 08:59:55.402396 | orchestrator | 2026-04-16 08:59:55.402409 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-04-16 08:59:55.402421 | orchestrator | Thursday 16 April 2026 08:59:51 +0000 (0:00:04.797) 0:00:18.734 ******** 2026-04-16 08:59:55.402433 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-04-16 08:59:55.402446 | orchestrator | 2026-04-16 08:59:55.402458 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-04-16 08:59:55.402471 
| orchestrator | Thursday 16 April 2026 08:59:52 +0000 (0:00:01.404) 0:00:20.139 ********
2026-04-16 08:59:55.402482 | orchestrator | skipping: [testbed-manager]
2026-04-16 08:59:55.402493 | orchestrator |
2026-04-16 08:59:55.402504 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-04-16 08:59:55.402514 | orchestrator | Thursday 16 April 2026 08:59:53 +0000 (0:00:01.090) 0:00:21.230 ********
2026-04-16 08:59:55.402526 | orchestrator | skipping: [testbed-manager]
2026-04-16 08:59:55.402537 | orchestrator |
2026-04-16 08:59:55.402547 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 08:59:55.402559 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 08:59:55.402570 | orchestrator |
2026-04-16 08:59:55.402581 | orchestrator |
2026-04-16 08:59:55.402592 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 08:59:55.402603 | orchestrator | Thursday 16 April 2026 08:59:55 +0000 (0:00:01.463) 0:00:22.693 ********
2026-04-16 08:59:55.402613 | orchestrator | ===============================================================================
2026-04-16 08:59:55.402624 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.80s
2026-04-16 08:59:55.402635 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.51s
2026-04-16 08:59:55.402645 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.19s
2026-04-16 08:59:55.402656 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.97s
2026-04-16 08:59:55.402681 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.81s
2026-04-16 08:59:55.402692 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.80s
2026-04-16 08:59:55.402703 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.79s
2026-04-16 08:59:55.402714 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.46s
2026-04-16 08:59:55.402725 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.40s
2026-04-16 08:59:55.402735 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.09s
2026-04-16 08:59:55.568343 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-16 08:59:55.568437 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-04-16 08:59:55.575216 | orchestrator | + set -e
2026-04-16 08:59:55.575361 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 08:59:55.575375 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 08:59:55.575385 | orchestrator | ++ INTERACTIVE=false
2026-04-16 08:59:55.575481 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 08:59:55.575496 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 08:59:55.575505 | orchestrator | + source /opt/manager-vars.sh
2026-04-16 08:59:55.575513 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-16 08:59:55.575522 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-16 08:59:55.575531 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-16 08:59:55.575540 | orchestrator | ++ CEPH_VERSION=reef
2026-04-16 08:59:55.575549 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-16 08:59:55.575559 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-16 08:59:55.575567 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 08:59:55.575576 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 08:59:55.575585 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-16 08:59:55.575594 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-16 08:59:55.575603 | orchestrator | ++ export ARA=false
2026-04-16 08:59:55.575611 | orchestrator | ++ ARA=false
2026-04-16 08:59:55.575620 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-16 08:59:55.575629 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-16 08:59:55.575637 | orchestrator | ++ export TEMPEST=false
2026-04-16 08:59:55.575646 | orchestrator | ++ TEMPEST=false
2026-04-16 08:59:55.575655 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 08:59:55.575663 | orchestrator | ++ IS_ZUUL=true
2026-04-16 08:59:55.575676 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 08:59:55.575690 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 08:59:55.575704 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 08:59:55.575718 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 08:59:55.575731 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 08:59:55.575744 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 08:59:55.575770 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 08:59:55.575785 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 08:59:55.575799 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 08:59:55.575812 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 08:59:55.575826 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-16 08:59:55.575840 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-16 08:59:55.575854 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 08:59:55.577072 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 08:59:55.583347 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-16 08:59:55.583419 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-16 08:59:55.583439 | orchestrator | + [[ true == \t\r\u\e ]]
2026-04-16 08:59:55.583455 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-04-16 09:00:04.071539 | orchestrator | 2026-04-16 09:00:04 | ERROR  | Unable to get ansible vault
password 2026-04-16 09:00:04.072585 | orchestrator | 2026-04-16 09:00:04 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-16 09:00:04.072654 | orchestrator | 2026-04-16 09:00:04 | ERROR  | Dropping encrypted entries 2026-04-16 09:00:04.106067 | orchestrator | 2026-04-16 09:00:04 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-16 09:00:04.107725 | orchestrator | 2026-04-16 09:00:04 | INFO  | Kolla configuration check passed 2026-04-16 09:00:04.304191 | orchestrator | 2026-04-16 09:00:04 | INFO  | Created vhost 'openstack' with default_queue_type=quorum 2026-04-16 09:00:04.324447 | orchestrator | 2026-04-16 09:00:04 | INFO  | Set permissions for user 'openstack' on vhost 'openstack' 2026-04-16 09:00:04.554652 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-16 09:00:10.592344 | orchestrator | 2026-04-16 09:00:10 | ERROR  | Unable to get ansible vault password 2026-04-16 09:00:10.593451 | orchestrator | 2026-04-16 09:00:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-16 09:00:10.593497 | orchestrator | 2026-04-16 09:00:10 | ERROR  | Dropping encrypted entries 2026-04-16 09:00:10.627654 | orchestrator | 2026-04-16 09:00:10 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
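The `prepare` step above reports creating vhost 'openstack' with `default_queue_type=quorum` through the RabbitMQ Management API. A minimal sketch of how such a request can be built, assuming the standard `PUT /api/vhosts/<name>` management endpoint; the host, port, and helper function here are illustrative and not taken from the osism tool itself:

```python
# Hedged sketch: build (but do not send) the management-API request that
# creates a vhost with quorum as its default queue type. The endpoint shape
# follows the RabbitMQ Management HTTP API; the helper is hypothetical.
import json
from urllib.parse import quote

def build_vhost_request(host: str, port: int, vhost: str, queue_type: str = "quorum"):
    """Return (method, url, body) for creating a vhost with a default queue type."""
    url = f"http://{host}:{port}/api/vhosts/{quote(vhost, safe='')}"
    body = json.dumps({"default_queue_type": queue_type})
    return "PUT", url, body

method, url, body = build_vhost_request("192.168.16.10", 15672, "openstack")
print(method, url, body)
```

Sending it would additionally require basic-auth credentials for the management user (`openstack` in the log above).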
2026-04-16 09:00:10.772958 | orchestrator | 2026-04-16 09:00:10 | INFO  | Found 207 classic queue(s) in vhost '/': 2026-04-16 09:00:10.773084 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - alarm.all.sample (vhost: /, messages: 0) 2026-04-16 09:00:10.773102 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - alarming.sample (vhost: /, messages: 0) 2026-04-16 09:00:10.773115 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican.workers (vhost: /, messages: 0) 2026-04-16 09:00:10.773127 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0) 2026-04-16 09:00:10.773139 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican.workers_fanout_01a1594acb0a4e108fce6c1a24caae87 (vhost: /, messages: 0) 2026-04-16 09:00:10.773155 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican.workers_fanout_1c9378ba6dd34d2faf560d3b20da1a5f (vhost: /, messages: 0) 2026-04-16 09:00:10.773527 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican.workers_fanout_ad5b21880f6d4748aba13eb5cdf2ce22 (vhost: /, messages: 0) 2026-04-16 09:00:10.773569 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0) 2026-04-16 09:00:10.773590 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central (vhost: /, messages: 0) 2026-04-16 09:00:10.773609 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.773629 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.773668 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.774378 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_571b4c16c5e1494c8fcfab249e922ed0 (vhost: /, messages: 0) 2026-04-16 09:00:10.774454 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_822651294e5549c4a7d587da19ab6f30 (vhost: /, messages: 0) 2026-04-16 
09:00:10.774465 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_86ea759e747d4cee902d8bd08fee3b92 (vhost: /, messages: 0) 2026-04-16 09:00:10.774473 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_88be1619eb3440ee80d4bc05a6ffb25a (vhost: /, messages: 0) 2026-04-16 09:00:10.774479 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_a69b7fa6dbba40b88522b311f1859f53 (vhost: /, messages: 0) 2026-04-16 09:00:10.774486 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - central_fanout_ac64ee17198742c7ab2bf332922a2deb (vhost: /, messages: 0) 2026-04-16 09:00:10.774492 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup (vhost: /, messages: 0) 2026-04-16 09:00:10.774500 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.774596 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.774608 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.775054 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup_fanout_2e0c94c6bc9f4cdd8c6bbb92fcace580 (vhost: /, messages: 0) 2026-04-16 09:00:10.775126 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup_fanout_577697a5aa2147cda747387e4be46170 (vhost: /, messages: 0) 2026-04-16 09:00:10.775141 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-backup_fanout_760a0376a55849109dfa4ff7b07d5a75 (vhost: /, messages: 0) 2026-04-16 09:00:10.775154 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler (vhost: /, messages: 0) 2026-04-16 09:00:10.775479 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.775503 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.775515 | orchestrator | 2026-04-16 
09:00:10 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.775527 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler_fanout_69b61f48ae9d4cceacefb818129cd9ff (vhost: /, messages: 0) 2026-04-16 09:00:10.775539 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler_fanout_7cc6ad2667e84fe0afbfc91969c099e1 (vhost: /, messages: 0) 2026-04-16 09:00:10.775550 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-scheduler_fanout_cc430cee73ea4c92a73c4c7a26af9054 (vhost: /, messages: 0) 2026-04-16 09:00:10.775562 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume (vhost: /, messages: 0) 2026-04-16 09:00:10.775716 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0) 2026-04-16 09:00:10.775734 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.775823 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_1ae1b26486194cfbae9a834e1fc89e01 (vhost: /, messages: 0) 2026-04-16 09:00:10.775844 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0) 2026-04-16 09:00:10.776326 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.776679 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_790bcf290d374560a6eb35221ae9f4e9 (vhost: /, messages: 0) 2026-04-16 09:00:10.776703 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0) 2026-04-16 09:00:10.776723 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.776743 | orchestrator | 2026-04-16 09:00:10 | INFO  
|  - cinder-volume.testbed-node-2@rbd-volumes_fanout_b9b26489e16a4659a6d02d8a545c0c2e (vhost: /, messages: 0) 2026-04-16 09:00:10.776779 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume_fanout_7f1161d345ed46299fb8a0f8b80becbc (vhost: /, messages: 0) 2026-04-16 09:00:10.776796 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume_fanout_f76e5372328b48d986b059432ef0cba8 (vhost: /, messages: 0) 2026-04-16 09:00:10.776815 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - cinder-volume_fanout_f87da39d073145aea8e8570819785389 (vhost: /, messages: 0) 2026-04-16 09:00:10.777272 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute (vhost: /, messages: 0) 2026-04-16 09:00:10.777305 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0) 2026-04-16 09:00:10.777317 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0) 2026-04-16 09:00:10.777328 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0) 2026-04-16 09:00:10.777339 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute_fanout_33473480a92147c784bb853536c224c8 (vhost: /, messages: 0) 2026-04-16 09:00:10.777351 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute_fanout_a3561eb6654b45c3b9097bfa05912a93 (vhost: /, messages: 0) 2026-04-16 09:00:10.777575 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - compute_fanout_e1c309845d604a4bab2dce7242aa583e (vhost: /, messages: 0) 2026-04-16 09:00:10.777597 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor (vhost: /, messages: 0) 2026-04-16 09:00:10.778304 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.778398 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.778413 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0) 
2026-04-16 09:00:10.778426 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_48060eaadec34d77a4f7855093ec58f7 (vhost: /, messages: 0) 2026-04-16 09:00:10.778528 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_63a6bbd809724d6ca3ae18e73237d87e (vhost: /, messages: 0) 2026-04-16 09:00:10.778546 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_695b438d2d3d4e9d835e46d975bedf7b (vhost: /, messages: 0) 2026-04-16 09:00:10.778557 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_6f4bb8b4325648fc94785d02a9396c04 (vhost: /, messages: 0) 2026-04-16 09:00:10.778568 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_6f4f004cd70947f0acc9c16713dd0ce0 (vhost: /, messages: 0) 2026-04-16 09:00:10.778635 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - conductor_fanout_80c1913e636b42fc95bde77d14b470ff (vhost: /, messages: 0) 2026-04-16 09:00:10.778650 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - event.sample (vhost: /, messages: 3) 2026-04-16 09:00:10.778665 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-16 09:00:10.779217 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor.egjvi5e4un6c (vhost: /, messages: 0) 2026-04-16 09:00:10.779401 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor.eyrmbnnnbzyv (vhost: /, messages: 0) 2026-04-16 09:00:10.779421 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor.xdghpoj555ep (vhost: /, messages: 0) 2026-04-16 09:00:10.779433 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_15aa7d26dabb429bbc34c1d5ea07ba13 (vhost: /, messages: 0) 2026-04-16 09:00:10.779444 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_4853ffdfb8914945815a21ec5936502f (vhost: /, messages: 0) 2026-04-16 09:00:10.779454 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_61fd3efe2ab5476d96502b6dba978c04 (vhost: /, 
messages: 0) 2026-04-16 09:00:10.779472 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_64eace644ebd4769beed7b389a18cf01 (vhost: /, messages: 0) 2026-04-16 09:00:10.779482 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_893062836498442b9d984d31d217f88c (vhost: /, messages: 0) 2026-04-16 09:00:10.779714 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_8c67448e4ce143caaf9be291bf1729b2 (vhost: /, messages: 0) 2026-04-16 09:00:10.779731 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_a08dee36b3e74510addd91c9642fbba5 (vhost: /, messages: 0) 2026-04-16 09:00:10.779741 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_d0ec7e6dd39d428d94bdf8ab9c061905 (vhost: /, messages: 0) 2026-04-16 09:00:10.779761 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - magnum-conductor_fanout_fb3659050193464d936ffb2a57fd1207 (vhost: /, messages: 0) 2026-04-16 09:00:10.779956 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data (vhost: /, messages: 0) 2026-04-16 09:00:10.779972 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.779983 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.780489 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.780508 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data_fanout_3a40081d1de247c1a41cb249aed94faa (vhost: /, messages: 0) 2026-04-16 09:00:10.780519 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data_fanout_3e6591f82a3c422da77f940debe72f63 (vhost: /, messages: 0) 2026-04-16 09:00:10.780623 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-data_fanout_5cd55cb2763248c881affa7acd0aa966 (vhost: /, messages: 0) 2026-04-16 09:00:10.780640 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - 
manila-scheduler (vhost: /, messages: 0) 2026-04-16 09:00:10.780662 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.780677 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.780693 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.780927 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler_fanout_542fa9ad4e17421ba96a0542bc13f2df (vhost: /, messages: 0) 2026-04-16 09:00:10.781053 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler_fanout_d4771de8743a4a34a1910707f0f8c214 (vhost: /, messages: 0) 2026-04-16 09:00:10.781077 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-scheduler_fanout_ffb8049bd621430dbc0cb2f9993503d5 (vhost: /, messages: 0) 2026-04-16 09:00:10.781094 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share (vhost: /, messages: 0) 2026-04-16 09:00:10.781114 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0) 2026-04-16 09:00:10.781221 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0) 2026-04-16 09:00:10.781382 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0) 2026-04-16 09:00:10.781397 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share_fanout_bc014dd0430d440da80eab884cbe35f2 (vhost: /, messages: 0) 2026-04-16 09:00:10.781407 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share_fanout_d0e66675da8949828cdf192119345c83 (vhost: /, messages: 0) 2026-04-16 09:00:10.781422 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - manila-share_fanout_e50f5a12b70f4b5282d4b61e530834ed (vhost: /, messages: 0) 2026-04-16 09:00:10.781561 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - 
notifications.audit (vhost: /, messages: 0) 2026-04-16 09:00:10.781592 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.critical (vhost: /, messages: 0) 2026-04-16 09:00:10.781741 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.debug (vhost: /, messages: 0) 2026-04-16 09:00:10.781757 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.error (vhost: /, messages: 0) 2026-04-16 09:00:10.781767 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.info (vhost: /, messages: 0) 2026-04-16 09:00:10.781777 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.sample (vhost: /, messages: 0) 2026-04-16 09:00:10.781786 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - notifications.warn (vhost: /, messages: 0) 2026-04-16 09:00:10.782136 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0) 2026-04-16 09:00:10.782170 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.782183 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.782193 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.782445 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2_fanout_299216b097db4dc1b89539c1b2ac7697 (vhost: /, messages: 0) 2026-04-16 09:00:10.782467 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2_fanout_828c1c5862304249a0e4cfbf4de5f1d1 (vhost: /, messages: 0) 2026-04-16 09:00:10.782547 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - octavia_provisioning_v2_fanout_fae9ed2cfb524fb8b385e92e71bd863f (vhost: /, messages: 0) 2026-04-16 09:00:10.782608 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer (vhost: /, messages: 0) 2026-04-16 09:00:10.782621 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - 
producer.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.782998 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.783023 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.783079 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_033d5caf828c4c42a303069060fc965a (vhost: /, messages: 0) 2026-04-16 09:00:10.783093 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_09d0aa22620e4f1c835991367424e85f (vhost: /, messages: 0) 2026-04-16 09:00:10.783103 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_4a03b67430104baba894f0ac1546fd58 (vhost: /, messages: 0) 2026-04-16 09:00:10.783118 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_66ea2503c56d4298901d4be64d5667eb (vhost: /, messages: 0) 2026-04-16 09:00:10.783486 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_834217f5d2eb4b54830711ad054eb938 (vhost: /, messages: 0) 2026-04-16 09:00:10.783509 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - producer_fanout_dbd8f88339e74fb3b2cbbc712c682b19 (vhost: /, messages: 0) 2026-04-16 09:00:10.783517 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin (vhost: /, messages: 0) 2026-04-16 09:00:10.783621 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.783635 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.783644 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.783842 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_10c182c349bf48c0b965b2bc57f1c6fc (vhost: /, messages: 0) 2026-04-16 09:00:10.783867 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_2c18d0e9cef844feadcf97795711bbad (vhost: /, messages: 0) 2026-04-16 
09:00:10.783875 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_46558a04ed9340c2b4f33c3a5711e05d (vhost: /, messages: 0) 2026-04-16 09:00:10.784033 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_5e97fa8b4c2647c18ba15984c99d63be (vhost: /, messages: 0) 2026-04-16 09:00:10.784274 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_671c9a00b4bd4a7e81f49e9b12de7cbc (vhost: /, messages: 0) 2026-04-16 09:00:10.784289 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_994194c972254233a93256f57cc25fcb (vhost: /, messages: 0) 2026-04-16 09:00:10.784297 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_b7d2083e209e461ea1420564ffb913ec (vhost: /, messages: 0) 2026-04-16 09:00:10.785307 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_bee8fa4f561046ac8520298a2a905bbe (vhost: /, messages: 0) 2026-04-16 09:00:10.785341 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-plugin_fanout_c4079d180e1f4e209433924554aef2e5 (vhost: /, messages: 0) 2026-04-16 09:00:10.785350 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin (vhost: /, messages: 0) 2026-04-16 09:00:10.785359 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.785367 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.785375 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.785383 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_158ed6da7565461496f422cf7f4020e6 (vhost: /, messages: 0) 2026-04-16 09:00:10.785391 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_333ba40a710c4fec8595de8c59a791bb (vhost: /, messages: 0) 2026-04-16 09:00:10.785407 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - 
q-reports-plugin_fanout_3861bff2adeb4299ae49cb8d3bc1b7f1 (vhost: /, messages: 0) 2026-04-16 09:00:10.785416 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_4f07ae52ec7e491a8ea6c02cd0e11661 (vhost: /, messages: 0) 2026-04-16 09:00:10.785424 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_603193f7795c426390f2f9b0be1a7e1d (vhost: /, messages: 0) 2026-04-16 09:00:10.785566 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_6625520f91ef42ef94406e66b98a9ad2 (vhost: /, messages: 0) 2026-04-16 09:00:10.785631 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_6cd23d6b3b9c46ce8351afb9c3628194 (vhost: /, messages: 0) 2026-04-16 09:00:10.785642 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_8eba655930b140c79c387ffa27c407a7 (vhost: /, messages: 0) 2026-04-16 09:00:10.785838 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_a5d3309a6e184ba6ac39625f2f3e54ff (vhost: /, messages: 0) 2026-04-16 09:00:10.785854 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_a7032cdd56084ad2a652ce0c64782a8b (vhost: /, messages: 0) 2026-04-16 09:00:10.785862 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_a88f6165aa024d409f5e6e92ae31aa16 (vhost: /, messages: 0) 2026-04-16 09:00:10.785870 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_c1d3f2ecea9e407ea9554406f0479f81 (vhost: /, messages: 0) 2026-04-16 09:00:10.786189 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_c71152494818447aadde708cf0b7c634 (vhost: /, messages: 0) 2026-04-16 09:00:10.786322 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_c8b5b1b5821f4ff6aebc9fb2d6fa7083 (vhost: /, messages: 0) 2026-04-16 09:00:10.786335 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_ccf71a3878f541b7ac840e306a59103e (vhost: /, messages: 0) 2026-04-16 
09:00:10.786347 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_def17b3edc944b7eacb825601a3c83d4 (vhost: /, messages: 0) 2026-04-16 09:00:10.786670 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_e063ba34cb1a408c8088abe13da81100 (vhost: /, messages: 0) 2026-04-16 09:00:10.786685 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-reports-plugin_fanout_e5dc105ad963459ca545102639782fd3 (vhost: /, messages: 0) 2026-04-16 09:00:10.786778 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-04-16 09:00:10.787507 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-04-16 09:00:10.787579 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-04-16 09:00:10.787600 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-04-16 09:00:10.787625 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_16d8178fc33945dd88478fc837dce780 (vhost: /, messages: 0) 2026-04-16 09:00:10.787767 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_1d5c0049778b4efcb38f9aa614d0791e (vhost: /, messages: 0) 2026-04-16 09:00:10.787784 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_2632727300c5442bbcd1ac21fc217e17 (vhost: /, messages: 0) 2026-04-16 09:00:10.788154 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_42b4713207324ff69f9355cafeaa9366 (vhost: /, messages: 0) 2026-04-16 09:00:10.788185 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_42b9a75b687c4d5cb31648dbe77808b6 (vhost: /, messages: 0) 2026-04-16 09:00:10.788398 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - 
q-server-resource-versions_fanout_4a52e0c499094c43bf46d15224a56e5d (vhost: /, messages: 0)
2026-04-16 09:00:10.788685 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_8c76527b8215480f9e71f0d915b540f0 (vhost: /, messages: 0)
2026-04-16 09:00:10.788761 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_a6e287bf969c4332b21f039f1991ba27 (vhost: /, messages: 0)
2026-04-16 09:00:10.789048 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - q-server-resource-versions_fanout_b18e182b50a441f5b858eff4c9ced3a0 (vhost: /, messages: 0)
2026-04-16 09:00:10.789294 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_09cc06ab9a634720a0c67989cfa9e780 (vhost: /, messages: 0)
2026-04-16 09:00:10.789327 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_3a5db4267c824c5fbff9becb97d26ee1 (vhost: /, messages: 0)
2026-04-16 09:00:10.789423 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_53798ce5a57d4eaa9f5f20c99536db72 (vhost: /, messages: 0)
2026-04-16 09:00:10.789743 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_5bbaa30339f04c5da005f0ec75a51bdd (vhost: /, messages: 0)
2026-04-16 09:00:10.789763 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_64dd6cae9d7441d1a9a0c47383adcb01 (vhost: /, messages: 0)
2026-04-16 09:00:10.789993 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_66b42ce502994fed84e0f9f71dcdc866 (vhost: /, messages: 0)
2026-04-16 09:00:10.790274 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_6a067d1a078c4d31a5c490c7b0c5cf1a (vhost: /, messages: 0)
2026-04-16 09:00:10.790294 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_723b6277b9d64a10af85734a9b50883d (vhost: /, messages: 0)
2026-04-16 09:00:10.790302 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_79356e0a2833474e8c25c509bff7cdf1 (vhost: /, messages: 0)
2026-04-16 09:00:10.790451 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_9330759bbb01411ca3b6689b4e1e4ab5 (vhost: /, messages: 0)
2026-04-16 09:00:10.790671 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_9eb6a393ace44913b2460729edd636b3 (vhost: /, messages: 0)
2026-04-16 09:00:10.790687 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_b1f01ead0b924c52a5fb6730cdf69613 (vhost: /, messages: 0)
2026-04-16 09:00:10.791031 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_b5c3033668474ec09b4cb6ba46f46bbd (vhost: /, messages: 0)
2026-04-16 09:00:10.791055 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_b87202d9c37b49a5b2cd65ed1f887dc0 (vhost: /, messages: 0)
2026-04-16 09:00:10.791462 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_cb001547a8ab4e26ae361f60fcc24efa (vhost: /, messages: 0)
2026-04-16 09:00:10.791487 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_cb25daf129e44a72a7b3a4af8c6c903b (vhost: /, messages: 0)
2026-04-16 09:00:10.791827 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_d045633add204771a575f97eb4f6c5b5 (vhost: /, messages: 0)
2026-04-16 09:00:10.791904 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_d2679730a7cc4176b3414076979874df (vhost: /, messages: 0)
2026-04-16 09:00:10.791914 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - reply_e09d6d4b49414541986564190ed84971 (vhost: /, messages: 0)
2026-04-16 09:00:10.791926 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler (vhost: /, messages: 0)
2026-04-16 09:00:10.792095 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-04-16 09:00:10.792176 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-04-16 09:00:10.792542 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-04-16 09:00:10.792775 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler_fanout_043abd48dc424ed1a6d7fcfdfe920d53 (vhost: /, messages: 0)
2026-04-16 09:00:10.792800 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler_fanout_2a0469c7cf724ce3b1050bf41d6c7854 (vhost: /, messages: 0)
2026-04-16 09:00:10.792953 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler_fanout_64a1743b9a5a40e086b5d38cc91bc67d (vhost: /, messages: 0)
2026-04-16 09:00:10.793256 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler_fanout_c70849d04bde4e1fbe119eaa9203912e (vhost: /, messages: 0)
2026-04-16 09:00:10.793276 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - scheduler_fanout_f82e8125a4204d78bc83e52c04458a43 (vhost: /, messages: 0)
2026-04-16 09:00:10.793603 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker (vhost: /, messages: 0)
2026-04-16 09:00:10.793693 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0)
2026-04-16 09:00:10.793865 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0)
2026-04-16 09:00:10.793878 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0)
2026-04-16 09:00:10.794259 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_428c4df88e5349f1b6a84c9fe682f270 (vhost: /, messages: 0)
2026-04-16 09:00:10.794318 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_7c3a992d2f3c4ddf9a25929c2a5eb177 (vhost: /, messages: 0)
2026-04-16 09:00:10.794613 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_902a31640a7e43248db8a05145485a3b (vhost: /, messages: 0)
2026-04-16 09:00:10.794634 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_b34f9629ef144cf9934da7e17eb963df (vhost: /, messages: 0)
2026-04-16 09:00:10.794642 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_d32e450c72c243319e57e7ac7b58c42e (vhost: /, messages: 0)
2026-04-16 09:00:10.795139 | orchestrator | 2026-04-16 09:00:10 | INFO  |  - worker_fanout_e46323bd5ac14819960ca74faf4c4316 (vhost: /, messages: 0)
2026-04-16 09:00:11.006884 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-16 09:00:16.986384 | orchestrator | 2026-04-16 09:00:16 | ERROR  | Unable to get ansible vault password
2026-04-16 09:00:16.986508 | orchestrator | 2026-04-16 09:00:16 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-16 09:00:16.986534 | orchestrator | 2026-04-16 09:00:16 | ERROR  | Dropping encrypted entries
2026-04-16 09:00:17.019514 | orchestrator | 2026-04-16 09:00:17 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-16 09:00:17.049446 | orchestrator | 2026-04-16 09:00:17 | INFO  | Found 46 exchange(s) in vhost '/':
2026-04-16 09:00:17.049671 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - aodh (type: topic, transient)
2026-04-16 09:00:17.049701 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - barbican.workers_fanout (type: fanout, transient)
2026-04-16 09:00:17.049811 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - ceilometer (type: topic, transient)
2026-04-16 09:00:17.049834 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - central_fanout (type: fanout, transient)
2026-04-16 09:00:17.049852 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder (type: topic, transient)
2026-04-16 09:00:17.049871 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-backup_fanout (type: fanout, transient)
2026-04-16 09:00:17.049888 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-scheduler_fanout (type: fanout, transient)
2026-04-16 09:00:17.050003 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout (type: fanout, transient)
2026-04-16 09:00:17.050088 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout (type: fanout, transient)
2026-04-16 09:00:17.050109 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout (type: fanout, transient)
2026-04-16 09:00:17.050227 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - cinder-volume_fanout (type: fanout, transient)
2026-04-16 09:00:17.050274 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - compute_fanout (type: fanout, transient)
2026-04-16 09:00:17.050293 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - conductor_fanout (type: fanout, transient)
2026-04-16 09:00:17.050490 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - designate (type: topic, transient)
2026-04-16 09:00:17.050604 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - dns (type: topic, transient)
2026-04-16 09:00:17.050625 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - glance (type: topic, transient)
2026-04-16 09:00:17.050664 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - heat (type: topic, transient)
2026-04-16 09:00:17.050683 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - ironic (type: topic, transient)
2026-04-16 09:00:17.050741 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - keystone (type: topic, transient)
2026-04-16 09:00:17.050761 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - l3_agent_fanout (type: fanout, transient)
2026-04-16 09:00:17.050885 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - magnum (type: topic, transient)
2026-04-16 09:00:17.050903 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - magnum-conductor_fanout (type: fanout, transient)
2026-04-16 09:00:17.051059 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - manila-data_fanout (type: fanout, transient)
2026-04-16 09:00:17.051087 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - manila-scheduler_fanout (type: fanout, transient)
2026-04-16 09:00:17.051125 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - manila-share_fanout (type: fanout, transient)
2026-04-16 09:00:17.051297 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron (type: topic, transient)
2026-04-16 09:00:17.051327 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron-vo-Network-1.1_fanout (type: fanout, transient)
2026-04-16 09:00:17.051365 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron-vo-Port-1.10_fanout (type: fanout, transient)
2026-04-16 09:00:17.051386 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron-vo-SecurityGroup-1.6_fanout (type: fanout, transient)
2026-04-16 09:00:17.051405 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron-vo-SecurityGroupRule-1.3_fanout (type: fanout, transient)
2026-04-16 09:00:17.051424 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - neutron-vo-Subnet-1.2_fanout (type: fanout, transient)
2026-04-16 09:00:17.051443 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - nova (type: topic, transient)
2026-04-16 09:00:17.051461 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - octavia (type: topic, transient)
2026-04-16 09:00:17.051482 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - octavia_provisioning_v2_fanout (type: fanout, transient)
2026-04-16 09:00:17.051501 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - openstack (type: topic, transient)
2026-04-16 09:00:17.051520 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - producer_fanout (type: fanout, transient)
2026-04-16 09:00:17.051538 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - q-agent-notifier-port-update_fanout (type: fanout, transient)
2026-04-16 09:00:17.051558 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - q-agent-notifier-security_group-update_fanout (type: fanout, transient)
2026-04-16 09:00:17.051589 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - q-plugin_fanout (type: fanout, transient)
2026-04-16 09:00:17.051607 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - q-reports-plugin_fanout (type: fanout, transient)
2026-04-16 09:00:17.051624 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - q-server-resource-versions_fanout (type: fanout, transient)
2026-04-16 09:00:17.051642 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - scheduler_fanout (type: fanout, transient)
2026-04-16 09:00:17.051660 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - swift (type: topic, transient)
2026-04-16 09:00:17.051678 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - trove (type: topic, transient)
2026-04-16 09:00:17.051697 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - worker_fanout (type: fanout, transient)
2026-04-16 09:00:17.051716 | orchestrator | 2026-04-16 09:00:17 | INFO  |  - zaqar (type: topic, transient)
2026-04-16 09:00:17.253384 | orchestrator | + osism apply -a upgrade keystone
2026-04-16 09:00:18.496802 | orchestrator | 2026-04-16 09:00:18 | INFO  | Prepare task for execution of keystone.
2026-04-16 09:00:18.558742 | orchestrator | 2026-04-16 09:00:18 | INFO  | Task e6758827-3d70-4d09-bd44-3b322247d9a9 (keystone) was prepared for execution.
2026-04-16 09:00:18.558891 | orchestrator | 2026-04-16 09:00:18 | INFO  | It takes a moment until task e6758827-3d70-4d09-bd44-3b322247d9a9 (keystone) has been started and output is visible here.
2026-04-16 09:00:27.586641 | orchestrator |
2026-04-16 09:00:27.586786 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:00:27.586806 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:00:27.586817 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:00:27.586870 | orchestrator |
2026-04-16 09:00:27.586882 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:00:27.586892 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:00:27.586902 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:00:27.586922 | orchestrator | Thursday 16 April 2026 09:00:23 +0000 (0:00:01.246) 0:00:01.246 ********
2026-04-16 09:00:27.586932 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:00:27.586944 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:00:27.586953 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:00:27.586963 | orchestrator |
2026-04-16 09:00:27.586973 |
orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:00:27.586983 | orchestrator | Thursday 16 April 2026 09:00:23 +0000 (0:00:00.833) 0:00:02.080 ********
2026-04-16 09:00:27.586993 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-04-16 09:00:27.587003 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-04-16 09:00:27.587013 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-04-16 09:00:27.587023 | orchestrator |
2026-04-16 09:00:27.587033 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-04-16 09:00:27.587042 | orchestrator |
2026-04-16 09:00:27.587052 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-16 09:00:27.587062 | orchestrator | Thursday 16 April 2026 09:00:24 +0000 (0:00:00.738) 0:00:02.818 ********
2026-04-16 09:00:27.587072 | orchestrator | included: /ansible/roles/keystone/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:00:27.587083 | orchestrator |
2026-04-16 09:00:27.587108 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-04-16 09:00:27.587120 | orchestrator | Thursday 16 April 2026 09:00:25 +0000 (0:00:01.024) 0:00:03.842 ********
2026-04-16 09:00:27.587135 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:27.587150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:27.587204 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:27.587220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:27.587299 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:27.587315 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:27.587327 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:27.587347 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:27.587366 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:33.115447 | orchestrator |
2026-04-16 09:00:33.115575 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-04-16 09:00:33.115598 | orchestrator | Thursday 16 April 2026 09:00:27 +0000 (0:00:02.052) 0:00:05.894 ********
2026-04-16 09:00:33.115615 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:00:33.115631 | orchestrator |
2026-04-16 09:00:33.115646 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-04-16 09:00:33.115663 | orchestrator | Thursday 16 April 2026 09:00:27 +0000 (0:00:00.109) 0:00:06.004 ********
2026-04-16 09:00:33.115678 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:00:33.115694 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:00:33.115709 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:00:33.115725 | orchestrator |
2026-04-16 09:00:33.115740 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-04-16 09:00:33.115754 | orchestrator | Thursday 16 April 2026 09:00:28 +0000 (0:00:00.284) 0:00:06.288 ********
2026-04-16 09:00:33.115768 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 09:00:33.115782 | orchestrator |
2026-04-16 09:00:33.115796 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-04-16 09:00:33.115810 | orchestrator | Thursday 16 April 2026 09:00:29 +0000 (0:00:01.038) 0:00:07.327 ********
2026-04-16 09:00:33.115825 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:00:33.115840 | orchestrator |
2026-04-16 09:00:33.115855 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-04-16 09:00:33.115889 | orchestrator | Thursday 16 April 2026 09:00:30 +0000 (0:00:00.995) 0:00:08.322 ********
2026-04-16 09:00:33.115910 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:33.115956 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:33.116001 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:33.116021 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:33.116047 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:33.116074 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:33.116092 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:33.116110 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:33.116126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:33.116142 | orchestrator |
2026-04-16 09:00:33.116169 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-04-16 09:00:34.146113 | orchestrator | Thursday 16 April 2026 09:00:33 +0000 (0:00:03.026) 0:00:11.348 ********
2026-04-16 09:00:34.146302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:34.146351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:34.146366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:34.146378 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:00:34.146392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:34.146424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:34.146437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:34.146449 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:00:34.146466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:34.146486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-04-16 09:00:34.146499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-04-16 09:00:34.146510 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:00:34.146522 | orchestrator |
2026-04-16 09:00:34.146534 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-04-16 09:00:34.146545 | orchestrator | Thursday 16 April 2026 09:00:33 +0000 (0:00:00.779) 0:00:12.128 ********
2026-04-16 09:00:34.146564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})
2026-04-16 09:00:35.919563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:35.919677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:00:35.919690 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:00:35.919700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:00:35.919709 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:35.919716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:00:35.919722 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:00:35.919749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:00:35.919771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:35.919782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:00:35.919793 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:00:35.919804 | orchestrator | 2026-04-16 09:00:35.919816 | orchestrator | TASK 
[keystone : Copying over config.json files for services] ****************** 2026-04-16 09:00:35.919827 | orchestrator | Thursday 16 April 2026 09:00:34 +0000 (0:00:00.720) 0:00:12.849 ******** 2026-04-16 09:00:35.919839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:35.919902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:40.642382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:40.642477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:00:40.642580 | orchestrator | 2026-04-16 09:00:40.642590 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-04-16 09:00:40.642600 | orchestrator | Thursday 16 April 2026 09:00:37 +0000 (0:00:03.354) 0:00:16.204 ******** 2026-04-16 09:00:40.642609 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:40.642618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:40.642627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:40.642648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:46.436651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:00:46.436771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:46.436790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 
'timeout': '30'}}}) 2026-04-16 09:00:46.436805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:00:46.436843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:00:46.436856 | orchestrator | 2026-04-16 09:00:46.436870 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-04-16 09:00:46.436883 | orchestrator | Thursday 16 April 2026 09:00:43 +0000 (0:00:05.152) 0:00:21.356 ******** 2026-04-16 09:00:46.436894 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:00:46.436906 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:00:46.436917 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:00:46.436932 | orchestrator | 2026-04-16 09:00:46.436953 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] 
************* 2026-04-16 09:00:46.436974 | orchestrator | Thursday 16 April 2026 09:00:44 +0000 (0:00:01.417) 0:00:22.774 ******** 2026-04-16 09:00:46.436993 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:00:46.437035 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:00:46.437057 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:00:46.437076 | orchestrator | 2026-04-16 09:00:46.437118 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-04-16 09:00:46.437139 | orchestrator | Thursday 16 April 2026 09:00:45 +0000 (0:00:00.610) 0:00:23.384 ******** 2026-04-16 09:00:46.437160 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:00:46.437180 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:00:46.437201 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:00:46.437221 | orchestrator | 2026-04-16 09:00:46.437271 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-04-16 09:00:46.437293 | orchestrator | Thursday 16 April 2026 09:00:45 +0000 (0:00:00.333) 0:00:23.718 ******** 2026-04-16 09:00:46.437314 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:00:46.437334 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:00:46.437353 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:00:46.437371 | orchestrator | 2026-04-16 09:00:46.437391 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-04-16 09:00:46.437409 | orchestrator | Thursday 16 April 2026 09:00:46 +0000 (0:00:00.550) 0:00:24.268 ******** 2026-04-16 09:00:46.437431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:00:46.437456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:00:46.437510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:00:46.437530 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:00:46.437566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:01:02.784718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:01:02.784837 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:01:02.784855 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:01:02.784872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:01:02.784913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:01:02.784925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:01:02.784937 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:01:02.784948 | orchestrator | 2026-04-16 09:01:02.784961 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-04-16 09:01:02.784973 | orchestrator | Thursday 16 April 2026 09:00:46 +0000 (0:00:00.628) 0:00:24.897 ******** 2026-04-16 09:01:02.784984 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:01:02.784996 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:01:02.785007 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:01:02.785018 | orchestrator | 2026-04-16 09:01:02.785043 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-04-16 09:01:02.785071 | orchestrator | Thursday 16 April 2026 09:00:46 
+0000 (0:00:00.283) 0:00:25.180 ******** 2026-04-16 09:01:02.785083 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-16 09:01:02.785095 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-16 09:01:02.785106 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-04-16 09:01:02.785116 | orchestrator | 2026-04-16 09:01:02.785127 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-04-16 09:01:02.785138 | orchestrator | Thursday 16 April 2026 09:00:48 +0000 (0:00:01.822) 0:00:27.002 ******** 2026-04-16 09:01:02.785149 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:01:02.785159 | orchestrator | 2026-04-16 09:01:02.785170 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-04-16 09:01:02.785181 | orchestrator | Thursday 16 April 2026 09:00:49 +0000 (0:00:00.929) 0:00:27.932 ******** 2026-04-16 09:01:02.785192 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:01:02.785202 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:01:02.785288 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:01:02.785302 | orchestrator | 2026-04-16 09:01:02.785314 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-04-16 09:01:02.785337 | orchestrator | Thursday 16 April 2026 09:00:50 +0000 (0:00:00.536) 0:00:28.469 ******** 2026-04-16 09:01:02.785350 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 09:01:02.785363 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:01:02.785376 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 09:01:02.785389 | orchestrator | 2026-04-16 09:01:02.785402 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 
2026-04-16 09:01:02.785415 | orchestrator | Thursday 16 April 2026 09:00:51 +0000 (0:00:01.061) 0:00:29.531 ******** 2026-04-16 09:01:02.785427 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:01:02.785440 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:01:02.785453 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:01:02.785465 | orchestrator | 2026-04-16 09:01:02.785479 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-04-16 09:01:02.785492 | orchestrator | Thursday 16 April 2026 09:00:51 +0000 (0:00:00.286) 0:00:29.818 ******** 2026-04-16 09:01:02.785505 | orchestrator | ok: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-16 09:01:02.785518 | orchestrator | ok: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-16 09:01:02.785531 | orchestrator | ok: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-04-16 09:01:02.785544 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-16 09:01:02.785558 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-16 09:01:02.785571 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-04-16 09:01:02.785584 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-16 09:01:02.785596 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-16 09:01:02.785606 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-04-16 09:01:02.785617 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-16 09:01:02.785628 | orchestrator | ok: [testbed-node-1] => (item={'src': 
'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-16 09:01:02.785639 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-04-16 09:01:02.785650 | orchestrator | ok: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-16 09:01:02.785660 | orchestrator | ok: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-16 09:01:02.785671 | orchestrator | ok: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-04-16 09:01:02.785682 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:01:02.785693 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:01:02.785704 | orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:01:02.785715 | orchestrator | ok: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:01:02.785726 | orchestrator | ok: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:01:02.785737 | orchestrator | ok: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:01:02.785748 | orchestrator | 2026-04-16 09:01:02.785759 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-04-16 09:01:02.785770 | orchestrator | Thursday 16 April 2026 09:01:00 +0000 (0:00:08.851) 0:00:38.669 ******** 2026-04-16 09:01:02.785780 | orchestrator | ok: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:01:02.785798 | orchestrator | ok: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:01:02.785809 | orchestrator | ok: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:01:02.785820 | 
orchestrator | ok: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:01:02.785839 | orchestrator | ok: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:01:07.523813 | orchestrator | ok: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:01:07.523913 | orchestrator | 2026-04-16 09:01:07.523926 | orchestrator | TASK [service-check-containers : keystone | Check containers] ****************** 2026-04-16 09:01:07.523939 | orchestrator | Thursday 16 April 2026 09:01:03 +0000 (0:00:02.862) 0:00:41.531 ******** 2026-04-16 09:01:07.524003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:01:07.524021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-04-16 09:01:07.524035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option 
httpchk']}}}}) 2026-04-16 09:01:07.524088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 09:01:07.524103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 09:01:07.524113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-04-16 
09:01:07.524124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:01:07.524136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-04-16 09:01:07.524146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
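The `healthcheck` dictionaries in the check-containers items above carry their durations as plain second strings (`'interval': '30'`, `'timeout': '30'`, …). Below is a minimal sketch of how such a dict could be mapped onto the shape the Docker Engine API's `HealthConfig` expects, which uses nanosecond durations; the helper name `to_docker_healthcheck` is hypothetical and not part of kolla-ansible.

```python
# Hypothetical helper: translate a kolla-ansible style healthcheck dict
# (seconds given as strings, as seen in the log above) into the
# nanosecond-based fields of the Docker Engine API's HealthConfig.
NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000']
        "Interval": int(hc["interval"]) * NS_PER_S,
        "Timeout": int(hc["timeout"]) * NS_PER_S,
        "StartPeriod": int(hc["start_period"]) * NS_PER_S,
        "Retries": int(hc["retries"]),
    }

# Values copied from the keystone container definition in the log.
keystone_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:5000"],
    "timeout": "30",
}
print(to_docker_healthcheck(keystone_hc)["Interval"])  # 30000000000
```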
2026-04-16 09:01:07.524156 | orchestrator |
2026-04-16 09:01:07.524173 | orchestrator | TASK [service-check-containers : keystone | Notify handlers to restart containers] ***
2026-04-16 09:01:07.524183 | orchestrator | Thursday 16 April 2026 09:01:06 +0000 (0:00:03.470) 0:00:45.001 ********
2026-04-16 09:01:07.524193 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 09:01:07.524204 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:01:07.524214 | orchestrator | }
2026-04-16 09:01:07.524224 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 09:01:07.524233 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:01:07.524243 | orchestrator | }
2026-04-16 09:01:07.524280 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 09:01:07.524290 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:01:07.524300 | orchestrator | }
2026-04-16 09:01:07.524310 | orchestrator |
2026-04-16 09:01:07.524320 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 09:01:07.524329 | orchestrator | Thursday 16 April 2026 09:01:07 +0000 (0:00:00.481) 0:00:45.483 ********
2026-04-16 09:01:07.524354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000',
'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:03:06.628090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:03:06.628232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:03:06.628246 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:03:06.628256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:03:06.628286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:03:06.628314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:03:06.628321 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:03:06.628345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-04-16 09:03:06.628352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-04-16 09:03:06.628359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-04-16 09:03:06.628370 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:03:06.628376 | orchestrator | 2026-04-16 09:03:06.628383 | orchestrator | TASK [keystone : Enable log_bin_trust_function_creators function] ************** 2026-04-16 09:03:06.628392 | orchestrator | Thursday 16 April 2026 09:01:08 +0000 (0:00:01.160) 0:00:46.644 ******** 2026-04-16 09:03:06.628398 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:03:06.628404 | orchestrator | 2026-04-16 09:03:06.628410 | orchestrator | TASK [keystone : Init keystone database upgrade] ******************************* 2026-04-16 09:03:06.628417 | orchestrator | Thursday 16 April 2026 09:01:10 +0000 (0:00:02.100) 0:00:48.744 ******** 2026-04-16 09:03:06.628423 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:03:06.628429 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:03:06.628435 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:03:06.628441 | orchestrator | 2026-04-16 09:03:06.628447 | orchestrator | TASK [keystone : Finish keystone database upgrade] ***************************** 2026-04-16 09:03:06.628454 | orchestrator | Thursday 16 April 2026 09:01:10 +0000 (0:00:00.457) 0:00:49.201 ******** 2026-04-16 
09:03:06.628460 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:03:06.628467 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:03:06.628473 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:03:06.628479 | orchestrator |
2026-04-16 09:03:06.628485 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 09:03:06.628492 | orchestrator | Thursday 16 April 2026 09:01:11 +0000 (0:00:00.801) 0:00:50.003 ********
2026-04-16 09:03:06.628498 | orchestrator |
2026-04-16 09:03:06.628504 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 09:03:06.628514 | orchestrator | Thursday 16 April 2026 09:01:11 +0000 (0:00:00.074) 0:00:50.079 ********
2026-04-16 09:03:06.628520 | orchestrator |
2026-04-16 09:03:06.628526 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-04-16 09:03:06.628532 | orchestrator | Thursday 16 April 2026 09:01:11 +0000 (0:00:00.074) 0:00:50.153 ********
2026-04-16 09:03:06.628539 | orchestrator |
2026-04-16 09:03:06.628544 | orchestrator | RUNNING HANDLER [keystone : Init keystone database upgrade] ********************
2026-04-16 09:03:06.628550 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-16 09:03:06.628557 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-16 09:03:06.628569 | orchestrator | Thursday 16 April 2026 09:01:11 +0000 (0:00:00.072) 0:00:50.225 ********
2026-04-16 09:03:06.628574 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:03:06.628580 | orchestrator |
2026-04-16 09:03:06.628586 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-04-16 09:03:06.628591 | orchestrator | Thursday 16 April 2026 09:02:15 +0000 (0:01:03.825) 0:01:54.051 ********
2026-04-16 09:03:06.628597 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:03:06.628603 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:03:06.628610 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:03:06.628617 | orchestrator |
2026-04-16 09:03:06.628624 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-04-16 09:03:06.628636 | orchestrator | Thursday 16 April 2026 09:03:06 +0000 (0:00:50.805) 0:02:44.856 ********
2026-04-16 09:03:47.028349 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:03:47.028492 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:03:47.028518 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:03:47.028538 | orchestrator |
2026-04-16 09:03:47.028560 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-04-16 09:03:47.028582 | orchestrator | Thursday 16 April 2026 09:03:18 +0000 (0:00:11.773) 0:02:56.630 ********
2026-04-16 09:03:47.028638 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:03:47.028658 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:03:47.028676 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:03:47.028695 | orchestrator |
2026-04-16 09:03:47.028716 | orchestrator | RUNNING HANDLER [keystone : Finish keystone database upgrade] ******************
2026-04-16 09:03:47.028736 | orchestrator | Thursday 16 April 2026 09:03:31 +0000 (0:00:12.616) 0:03:09.246 ********
2026-04-16 09:03:47.028756 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:03:47.028775 | orchestrator |
2026-04-16 09:03:47.028794 | orchestrator | TASK [keystone : Disable log_bin_trust_function_creators function] *************
2026-04-16 09:03:47.028814 | orchestrator | Thursday 16 April 2026 09:03:43 +0000 (0:00:12.795) 0:03:22.042 ********
2026-04-16 09:03:47.028834 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:03:47.028856 | orchestrator |
2026-04-16 09:03:47.028877 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:03:47.028899 | orchestrator | testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-16 09:03:47.028920 | orchestrator | testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-04-16 09:03:47.028940 | orchestrator | testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-04-16 09:03:47.028962 | orchestrator |
2026-04-16 09:03:47.028984 | orchestrator |
2026-04-16 09:03:47.029007 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:03:47.029030 | orchestrator | Thursday 16 April 2026 09:03:46 +0000 (0:00:02.979) 0:03:25.021 ********
2026-04-16 09:03:47.029051 | orchestrator | ===============================================================================
2026-04-16 09:03:47.029073 | orchestrator | keystone : Init keystone database upgrade ------------------------------ 63.83s
2026-04-16 09:03:47.029153 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 50.81s
2026-04-16 09:03:47.029173 | orchestrator | keystone : Finish keystone database upgrade ---------------------------- 12.80s
2026-04-16 09:03:47.029191 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.62s
2026-04-16 09:03:47.029208 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 11.77s
2026-04-16 09:03:47.029224 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.85s
2026-04-16 09:03:47.029240 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.15s
2026-04-16 09:03:47.029257 | orchestrator | service-check-containers : keystone | Check containers ------------------ 3.47s
2026-04-16 09:03:47.029275 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.36s
2026-04-16 09:03:47.029294 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.03s
2026-04-16 09:03:47.029313 | orchestrator | keystone : Disable log_bin_trust_function_creators function ------------- 2.98s
2026-04-16 09:03:47.029332 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.86s
2026-04-16 09:03:47.029351 | orchestrator | keystone : Enable log_bin_trust_function_creators function -------------- 2.10s
2026-04-16 09:03:47.029369 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.05s
2026-04-16 09:03:47.029383 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.82s
2026-04-16 09:03:47.029394 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.42s
2026-04-16 09:03:47.029405 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.16s
2026-04-16 09:03:47.029434 | orchestrator | keystone : Generate the required cron jobs for the node ----------------- 1.06s
2026-04-16 09:03:47.029446 | orchestrator | keystone : Check if Keystone domain-specific config is supplied --------- 1.04s
2026-04-16 09:03:47.029457 | orchestrator | keystone : include_tasks ------------------------------------------------ 1.02s
2026-04-16 09:03:47.143598 | orchestrator | + osism apply -a upgrade placement
2026-04-16 09:03:48.250450 | orchestrator | 2026-04-16 09:03:48 | INFO  | Prepare task for execution of placement.
2026-04-16 09:03:48.305034 | orchestrator | 2026-04-16 09:03:48 | INFO  | Task c545501e-9695-45da-9b7d-63fd4f9987a8 (placement) was prepared for execution.
2026-04-16 09:03:48.305221 | orchestrator | 2026-04-16 09:03:48 | INFO  | It takes a moment until task c545501e-9695-45da-9b7d-63fd4f9987a8 (placement) has been started and output is visible here.
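A run like this is usually judged by its PLAY RECAP: the play is healthy only if every host reports `failed=0` and `unreachable=0`. The following is a small sketch that parses those counters from recap lines of the format shown above; `recap_ok` is a hypothetical helper, not part of osism or Ansible.

```python
import re

# Hypothetical helper: parse Ansible "PLAY RECAP" host lines and decide
# whether the run succeeded (no failed or unreachable hosts).
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def recap_ok(lines):
    stats = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            stats[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    healthy = all(
        s["failed"] == 0 and s["unreachable"] == 0 for s in stats.values()
    )
    return stats, healthy

# Recap lines copied from the keystone play above.
lines = [
    "testbed-node-0 : ok=25  changed=11  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0",
    "testbed-node-1 : ok=19  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0",
    "testbed-node-2 : ok=21  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0",
]
stats, healthy = recap_ok(lines)
print(healthy)  # True
```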
2026-04-16 09:04:42.395510 | orchestrator |
2026-04-16 09:04:42.395659 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:04:42.395689 | orchestrator |
2026-04-16 09:04:42.395708 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:04:42.395727 | orchestrator | Thursday 16 April 2026 09:03:52 +0000 (0:00:01.732) 0:00:01.732 ********
2026-04-16 09:04:42.395746 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:04:42.395767 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:04:42.395787 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:04:42.395807 | orchestrator |
2026-04-16 09:04:42.395827 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:04:42.395847 | orchestrator | Thursday 16 April 2026 09:03:54 +0000 (0:00:01.645) 0:00:03.377 ********
2026-04-16 09:04:42.395864 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-04-16 09:04:42.395876 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-04-16 09:04:42.395887 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-04-16 09:04:42.395898 | orchestrator |
2026-04-16 09:04:42.395909 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-04-16 09:04:42.395920 | orchestrator |
2026-04-16 09:04:42.395930 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-04-16 09:04:42.395941 | orchestrator | Thursday 16 April 2026 09:03:56 +0000 (0:00:02.171) 0:00:05.548 ********
2026-04-16 09:04:42.395953 | orchestrator | included: /ansible/roles/placement/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:04:42.395964 | orchestrator |
2026-04-16 09:04:42.395975 | orchestrator | TASK [service-ks-register : placement | Creating/deleting services] ************
2026-04-16 09:04:42.395986 | orchestrator | Thursday 16 April 2026 09:03:59 +0000 (0:00:02.894) 0:00:08.442 ********
2026-04-16 09:04:42.395997 | orchestrator | ok: [testbed-node-0] => (item=placement (placement))
2026-04-16 09:04:42.396007 | orchestrator |
2026-04-16 09:04:42.396056 | orchestrator | TASK [service-ks-register : placement | Creating/deleting endpoints] ***********
2026-04-16 09:04:42.396095 | orchestrator | Thursday 16 April 2026 09:04:04 +0000 (0:00:04.959) 0:00:13.402 ********
2026-04-16 09:04:42.396108 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-04-16 09:04:42.396123 | orchestrator | ok: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-04-16 09:04:42.396137 | orchestrator |
2026-04-16 09:04:42.396149 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-04-16 09:04:42.396160 | orchestrator | Thursday 16 April 2026 09:04:12 +0000 (0:00:08.068) 0:00:21.470 ********
2026-04-16 09:04:42.396171 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 09:04:42.396182 | orchestrator |
2026-04-16 09:04:42.396193 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-04-16 09:04:42.396204 | orchestrator | Thursday 16 April 2026 09:04:16 +0000 (0:00:04.310) 0:00:25.781 ********
2026-04-16 09:04:42.396215 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-04-16 09:04:42.396226 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 09:04:42.396237 | orchestrator |
2026-04-16 09:04:42.396248 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-04-16 09:04:42.396259 | orchestrator | Thursday 16 April 2026 09:04:23 +0000 (0:00:06.625) 0:00:32.407 ********
2026-04-16 09:04:42.396298 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 09:04:42.396310 | orchestrator | 2026-04-16 09:04:42.396321 | orchestrator | TASK [service-ks-register : placement | Granting/revoking user roles] ********** 2026-04-16 09:04:42.396332 | orchestrator | Thursday 16 April 2026 09:04:27 +0000 (0:00:04.458) 0:00:36.866 ******** 2026-04-16 09:04:42.396343 | orchestrator | ok: [testbed-node-0] => (item=placement -> service -> admin) 2026-04-16 09:04:42.396354 | orchestrator | 2026-04-16 09:04:42.396365 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-16 09:04:42.396376 | orchestrator | Thursday 16 April 2026 09:04:33 +0000 (0:00:05.118) 0:00:41.984 ******** 2026-04-16 09:04:42.396387 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:04:42.396398 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:04:42.396409 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:04:42.396420 | orchestrator | 2026-04-16 09:04:42.396431 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-04-16 09:04:42.396441 | orchestrator | Thursday 16 April 2026 09:04:34 +0000 (0:00:01.648) 0:00:43.633 ******** 2026-04-16 09:04:42.396497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:42.396516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:42.396530 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 
'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:42.396553 | orchestrator | 2026-04-16 09:04:42.396564 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-04-16 09:04:42.396575 | orchestrator | Thursday 16 April 2026 09:04:36 +0000 (0:00:02.142) 0:00:45.776 ******** 2026-04-16 09:04:42.396586 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:04:42.396597 | orchestrator | 2026-04-16 09:04:42.396608 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-04-16 09:04:42.396619 | orchestrator | Thursday 16 April 2026 09:04:37 +0000 (0:00:01.118) 0:00:46.895 ******** 2026-04-16 09:04:42.396630 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:04:42.396641 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:04:42.396652 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:04:42.396662 | orchestrator | 2026-04-16 09:04:42.396673 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-04-16 09:04:42.396684 | orchestrator | Thursday 16 April 2026 09:04:39 +0000 (0:00:01.324) 0:00:48.219 ******** 2026-04-16 09:04:42.396695 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:04:42.396706 | orchestrator | 2026-04-16 09:04:42.396723 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-04-16 09:04:42.396744 | orchestrator | Thursday 16 April 2026 
09:04:41 +0000 (0:00:01.836) 0:00:50.055 ******** 2026-04-16 09:04:42.396783 | orchestrator | ok: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:45.669431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:45.669529 | orchestrator | ok: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:04:45.669562 | orchestrator | 2026-04-16 09:04:45.669573 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-04-16 09:04:45.669582 | orchestrator | Thursday 16 April 2026 09:04:43 +0000 (0:00:02.354) 0:00:52.410 ******** 2026-04-16 09:04:45.669591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:04:45.669600 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:04:45.669635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:04:45.669644 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:04:45.669652 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:04:45.669665 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:04:45.669672 | orchestrator | 2026-04-16 09:04:45.669680 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-04-16 09:04:45.669688 | orchestrator | Thursday 16 April 2026 09:04:45 +0000 (0:00:01.727) 0:00:54.137 ******** 2026-04-16 09:04:45.669696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:04:45.669706 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:04:45.669727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:04:45.669747 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:04:45.669770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:00.584293 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:05:00.584433 | orchestrator | 2026-04-16 09:05:00.584462 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-04-16 09:05:00.584482 | orchestrator | Thursday 16 April 2026 09:04:46 +0000 (0:00:01.587) 0:00:55.725 ******** 2026-04-16 09:05:00.584498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584560 | orchestrator | 2026-04-16 09:05:00.584571 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-04-16 09:05:00.584582 | orchestrator | Thursday 16 April 2026 09:04:49 +0000 (0:00:02.463) 0:00:58.188 ******** 2026-04-16 09:05:00.584615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584650 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:00.584675 | orchestrator | 2026-04-16 09:05:00.584692 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-04-16 09:05:00.584703 | orchestrator | Thursday 16 April 2026 09:04:52 +0000 (0:00:03.513) 0:01:01.702 ******** 2026-04-16 09:05:00.584714 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-16 09:05:00.584726 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:05:00.584738 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-16 09:05:00.584748 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:05:00.584760 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)  2026-04-16 09:05:00.584770 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:05:00.584781 | orchestrator | 2026-04-16 09:05:00.584792 | orchestrator | TASK [Configure uWSGI for Placement] ******************************************* 2026-04-16 09:05:00.584814 | orchestrator | Thursday 16 April 2026 09:04:54 +0000 (0:00:01.507) 0:01:03.210 ******** 2026-04-16 09:05:00.584828 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:05:00.584842 | orchestrator | 2026-04-16 09:05:00.584855 | orchestrator | TASK [service-uwsgi-config : Copying over placement-api uWSGI config] ********** 2026-04-16 09:05:00.584867 | orchestrator | Thursday 16 April 2026 09:04:56 +0000 (0:00:01.791) 0:01:05.001 ******** 2026-04-16 09:05:00.584880 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:05:00.584894 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:05:00.584906 | orchestrator | changed: [testbed-node-2] 2026-04-16 
09:05:00.584919 | orchestrator | 2026-04-16 09:05:00.584937 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-04-16 09:05:00.584957 | orchestrator | Thursday 16 April 2026 09:04:59 +0000 (0:00:02.952) 0:01:07.954 ******** 2026-04-16 09:05:00.584974 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:05:00.584994 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:05:00.585044 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:05:00.585062 | orchestrator | 2026-04-16 09:05:00.585092 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-04-16 09:05:07.633172 | orchestrator | Thursday 16 April 2026 09:05:01 +0000 (0:00:02.524) 0:01:10.478 ******** 2026-04-16 09:05:07.633313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:07.633342 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:05:07.633361 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:07.633377 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:05:07.633415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:07.633459 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:05:07.633477 | orchestrator | 2026-04-16 09:05:07.633492 | orchestrator | TASK [service-check-containers : placement | Check containers] ***************** 2026-04-16 09:05:07.633508 | orchestrator | Thursday 16 April 2026 09:05:03 +0000 (0:00:02.086) 0:01:12.564 ******** 2026-04-16 09:05:07.633548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:07.633566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:07.633584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:05:07.633613 | orchestrator | 2026-04-16 09:05:07.633637 | orchestrator | TASK [service-check-containers : placement | Notify handlers to restart 
containers] *** 2026-04-16 09:05:07.633654 | orchestrator | Thursday 16 April 2026 09:05:06 +0000 (0:00:02.423) 0:01:14.988 ******** 2026-04-16 09:05:07.633670 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:05:07.633684 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:05:07.633699 | orchestrator | } 2026-04-16 09:05:07.633713 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:05:07.633729 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:05:07.633744 | orchestrator | } 2026-04-16 09:05:07.633758 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:05:07.633774 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:05:07.633788 | orchestrator | } 2026-04-16 09:05:07.633802 | orchestrator | 2026-04-16 09:05:07.633817 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:05:07.633832 | orchestrator | Thursday 16 April 2026 09:05:07 +0000 (0:00:01.321) 0:01:16.310 ******** 2026-04-16 09:05:07.633862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:59.805625 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:05:59.805754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:59.805783 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:05:59.805805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:05:59.805857 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:05:59.805879 | orchestrator | 2026-04-16 09:05:59.805901 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-04-16 09:05:59.805921 | orchestrator | Thursday 16 April 2026 09:05:09 +0000 (0:00:02.002) 0:01:18.312 ******** 2026-04-16 09:05:59.806074 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:05:59.806096 | orchestrator | 2026-04-16 09:05:59.806108 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-04-16 09:05:59.806119 | orchestrator | Thursday 16 April 2026 09:05:12 +0000 (0:00:03.203) 0:01:21.516 ******** 2026-04-16 09:05:59.806130 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:05:59.806151 | orchestrator | 2026-04-16 09:05:59.806165 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-04-16 09:05:59.806178 | orchestrator | Thursday 16 April 2026 09:05:16 +0000 (0:00:03.519) 0:01:25.036 ******** 2026-04-16 09:05:59.806207 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:05:59.806220 | orchestrator | 2026-04-16 09:05:59.806233 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-16 09:05:59.806245 | orchestrator | Thursday 16 April 2026 09:05:32 +0000 (0:00:16.292) 0:01:41.329 ******** 2026-04-16 09:05:59.806258 | orchestrator | 2026-04-16 09:05:59.806271 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2026-04-16 09:05:59.806296 | orchestrator | Thursday 16 April 2026 09:05:32 +0000 (0:00:00.434) 0:01:41.764 ******** 2026-04-16 09:05:59.806308 | orchestrator | 2026-04-16 09:05:59.806321 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-04-16 09:05:59.806335 | orchestrator | Thursday 16 April 2026 09:05:33 +0000 (0:00:00.443) 0:01:42.208 ******** 2026-04-16 09:05:59.806348 | orchestrator | 2026-04-16 09:05:59.806360 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-04-16 09:05:59.806373 | orchestrator | Thursday 16 April 2026 09:05:34 +0000 (0:00:00.791) 0:01:43.000 ******** 2026-04-16 09:05:59.806385 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:05:59.806400 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:05:59.806413 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:05:59.806426 | orchestrator | 2026-04-16 09:05:59.806438 | orchestrator | TASK [placement : Perform Placement online data migration] ********************* 2026-04-16 09:05:59.806451 | orchestrator | Thursday 16 April 2026 09:05:46 +0000 (0:00:12.764) 0:01:55.764 ******** 2026-04-16 09:05:59.806464 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:05:59.806477 | orchestrator | 2026-04-16 09:05:59.806490 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:05:59.806504 | orchestrator | testbed-node-0 : ok=24  changed=9  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-04-16 09:05:59.806540 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 09:05:59.806552 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 09:05:59.806563 | orchestrator | 2026-04-16 09:05:59.806574 | orchestrator | 2026-04-16 09:05:59.806585 | orchestrator 
| TASKS RECAP ******************************************************************** 2026-04-16 09:05:59.806596 | orchestrator | Thursday 16 April 2026 09:05:59 +0000 (0:00:12.663) 0:02:08.428 ******** 2026-04-16 09:05:59.806622 | orchestrator | =============================================================================== 2026-04-16 09:05:59.806641 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.29s 2026-04-16 09:05:59.806659 | orchestrator | placement : Restart placement-api container ---------------------------- 12.76s 2026-04-16 09:05:59.806677 | orchestrator | placement : Perform Placement online data migration -------------------- 12.66s 2026-04-16 09:05:59.806694 | orchestrator | service-ks-register : placement | Creating/deleting endpoints ----------- 8.07s 2026-04-16 09:05:59.806712 | orchestrator | service-ks-register : placement | Creating users ------------------------ 6.63s 2026-04-16 09:05:59.806731 | orchestrator | service-ks-register : placement | Granting/revoking user roles ---------- 5.12s 2026-04-16 09:05:59.806751 | orchestrator | service-ks-register : placement | Creating/deleting services ------------ 4.96s 2026-04-16 09:05:59.806770 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.46s 2026-04-16 09:05:59.806788 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.31s 2026-04-16 09:05:59.806803 | orchestrator | placement : Creating placement databases user and setting permissions --- 3.52s 2026-04-16 09:05:59.806815 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.51s 2026-04-16 09:05:59.806834 | orchestrator | placement : Creating placement databases -------------------------------- 3.20s 2026-04-16 09:05:59.806852 | orchestrator | service-uwsgi-config : Copying over placement-api uWSGI config ---------- 2.95s 2026-04-16 09:05:59.806871 | orchestrator | placement : 
include_tasks ----------------------------------------------- 2.90s 2026-04-16 09:05:59.806890 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.52s 2026-04-16 09:05:59.806907 | orchestrator | placement : Copying over config.json files for services ----------------- 2.46s 2026-04-16 09:05:59.806924 | orchestrator | service-check-containers : placement | Check containers ----------------- 2.42s 2026-04-16 09:05:59.806982 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.36s 2026-04-16 09:05:59.807003 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.17s 2026-04-16 09:05:59.807019 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.14s 2026-04-16 09:05:59.969491 | orchestrator | + osism apply -a upgrade neutron 2026-04-16 09:06:01.245035 | orchestrator | 2026-04-16 09:06:01 | INFO  | Prepare task for execution of neutron. 2026-04-16 09:06:01.307388 | orchestrator | 2026-04-16 09:06:01 | INFO  | Task bbbd1105-ef0f-4ee6-b6d0-3ddd77a0a1e2 (neutron) was prepared for execution. 2026-04-16 09:06:01.307511 | orchestrator | 2026-04-16 09:06:01 | INFO  | It takes a moment until task bbbd1105-ef0f-4ee6-b6d0-3ddd77a0a1e2 (neutron) has been started and output is visible here. 
2026-04-16 09:06:36.118454 | orchestrator | 2026-04-16 09:06:36.118599 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:06:36.118628 | orchestrator | 2026-04-16 09:06:36.118651 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:06:36.118672 | orchestrator | Thursday 16 April 2026 09:06:06 +0000 (0:00:02.109) 0:00:02.109 ******** 2026-04-16 09:06:36.118691 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:06:36.118713 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:06:36.118732 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:06:36.118752 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:06:36.118771 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:06:36.118791 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:06:36.118811 | orchestrator | 2026-04-16 09:06:36.118832 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:06:36.118852 | orchestrator | Thursday 16 April 2026 09:06:09 +0000 (0:00:02.642) 0:00:04.751 ******** 2026-04-16 09:06:36.118873 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-04-16 09:06:36.118893 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-04-16 09:06:36.118937 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-04-16 09:06:36.118994 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-04-16 09:06:36.119016 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-04-16 09:06:36.119035 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-04-16 09:06:36.119055 | orchestrator | 2026-04-16 09:06:36.119075 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-04-16 09:06:36.119095 | orchestrator | 2026-04-16 09:06:36.119115 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-04-16 09:06:36.119134 | orchestrator | Thursday 16 April 2026 09:06:11 +0000 (0:00:02.196) 0:00:06.948 ******** 2026-04-16 09:06:36.119154 | orchestrator | included: /ansible/roles/neutron/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:06:36.119173 | orchestrator | 2026-04-16 09:06:36.119194 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-04-16 09:06:36.119213 | orchestrator | Thursday 16 April 2026 09:06:13 +0000 (0:00:02.318) 0:00:09.266 ******** 2026-04-16 09:06:36.119232 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:06:36.119253 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:06:36.119271 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:06:36.119290 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:06:36.119302 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:06:36.119313 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:06:36.119324 | orchestrator | 2026-04-16 09:06:36.119338 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-04-16 09:06:36.119355 | orchestrator | Thursday 16 April 2026 09:06:16 +0000 (0:00:03.087) 0:00:12.354 ******** 2026-04-16 09:06:36.119375 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:06:36.119393 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:06:36.119411 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:06:36.119430 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:06:36.119449 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:06:36.119468 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:06:36.119487 | orchestrator | 2026-04-16 09:06:36.119504 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-04-16 09:06:36.119522 | orchestrator | Thursday 16 April 2026 09:06:19 +0000 (0:00:02.248) 0:00:14.602 ******** 
2026-04-16 09:06:36.119538 | orchestrator | ok: [testbed-node-0] => { 2026-04-16 09:06:36.119556 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119575 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119594 | orchestrator | } 2026-04-16 09:06:36.119614 | orchestrator | ok: [testbed-node-1] => { 2026-04-16 09:06:36.119632 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119735 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119749 | orchestrator | } 2026-04-16 09:06:36.119760 | orchestrator | ok: [testbed-node-2] => { 2026-04-16 09:06:36.119771 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119782 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119793 | orchestrator | } 2026-04-16 09:06:36.119804 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 09:06:36.119815 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119825 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119836 | orchestrator | } 2026-04-16 09:06:36.119847 | orchestrator | ok: [testbed-node-4] => { 2026-04-16 09:06:36.119858 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119869 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119880 | orchestrator | } 2026-04-16 09:06:36.119891 | orchestrator | ok: [testbed-node-5] => { 2026-04-16 09:06:36.119901 | orchestrator |  "changed": false, 2026-04-16 09:06:36.119939 | orchestrator |  "msg": "All assertions passed" 2026-04-16 09:06:36.119955 | orchestrator | } 2026-04-16 09:06:36.119966 | orchestrator | 2026-04-16 09:06:36.119977 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-04-16 09:06:36.119988 | orchestrator | Thursday 16 April 2026 09:06:20 +0000 (0:00:01.873) 0:00:16.476 ******** 2026-04-16 09:06:36.120013 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:36.120025 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:36.120036 | orchestrator 
| skipping: [testbed-node-2] 2026-04-16 09:06:36.120046 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:36.120057 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:36.120068 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:36.120079 | orchestrator | 2026-04-16 09:06:36.120090 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-16 09:06:36.120100 | orchestrator | Thursday 16 April 2026 09:06:23 +0000 (0:00:02.171) 0:00:18.648 ******** 2026-04-16 09:06:36.120112 | orchestrator | included: /ansible/roles/neutron/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:06:36.120124 | orchestrator | 2026-04-16 09:06:36.120135 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-04-16 09:06:36.120146 | orchestrator | Thursday 16 April 2026 09:06:25 +0000 (0:00:02.219) 0:00:20.867 ******** 2026-04-16 09:06:36.120174 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:36.120185 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:36.120195 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:36.120206 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:36.120238 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:36.120249 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:36.120260 | orchestrator | 2026-04-16 09:06:36.120271 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-04-16 09:06:36.120282 | orchestrator | Thursday 16 April 2026 09:06:28 +0000 (0:00:03.341) 0:00:24.209 ******** 2026-04-16 09:06:36.120293 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:06:36.120304 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:06:36.120314 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:06:36.120325 | orchestrator | ok: [testbed-node-3] 2026-04-16 
09:06:36.120336 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:06:36.120347 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:06:36.120358 | orchestrator | 2026-04-16 09:06:36.120369 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-04-16 09:06:36.120379 | orchestrator | Thursday 16 April 2026 09:06:30 +0000 (0:00:02.035) 0:00:26.245 ******** 2026-04-16 09:06:36.120390 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:36.120401 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:36.120412 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:36.120423 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:36.120434 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:36.120445 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:36.120456 | orchestrator | 2026-04-16 09:06:36.120466 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-04-16 09:06:36.120477 | orchestrator | Thursday 16 April 2026 09:06:34 +0000 (0:00:03.321) 0:00:29.567 ******** 2026-04-16 09:06:36.120494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:36.120512 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:36.120539 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:36.120563 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096241 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096377 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096433 | orchestrator | 2026-04-16 09:06:47.096455 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-04-16 09:06:47.096475 | orchestrator | Thursday 16 April 2026 09:06:37 +0000 (0:00:03.669) 0:00:33.236 ******** 2026-04-16 09:06:47.096492 | orchestrator | [WARNING]: Skipped 2026-04-16 09:06:47.096509 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-04-16 09:06:47.096526 | orchestrator | due to this access issue: 2026-04-16 09:06:47.096544 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-04-16 09:06:47.096560 | orchestrator | a directory 2026-04-16 09:06:47.096576 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:06:47.096593 | orchestrator | 2026-04-16 09:06:47.096608 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-16 09:06:47.096625 | orchestrator | Thursday 16 April 2026 09:06:39 +0000 (0:00:02.230) 0:00:35.467 ******** 2026-04-16 09:06:47.096642 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 
09:06:47.096660 | orchestrator | 2026-04-16 09:06:47.096677 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-04-16 09:06:47.096694 | orchestrator | Thursday 16 April 2026 09:06:42 +0000 (0:00:02.266) 0:00:37.733 ******** 2026-04-16 09:06:47.096730 | orchestrator | ok: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:47.096792 | orchestrator | ok: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:47.096815 | orchestrator | ok: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:06:47.096846 | orchestrator | ok: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096867 | orchestrator | ok: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096892 | orchestrator | ok: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:06:47.096971 | orchestrator | 2026-04-16 09:06:47.096990 | orchestrator | TASK [service-cert-copy : neutron | Copying 
over backend internal TLS certificate] *** 2026-04-16 09:06:47.097007 | orchestrator | Thursday 16 April 2026 09:06:45 +0000 (0:00:03.647) 0:00:41.381 ******** 2026-04-16 09:06:47.097037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:50.422335 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:50.422453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:50.422473 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:50.422488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:50.422502 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:50.422534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:50.422556 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:50.422574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:50.422645 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:50.422692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:50.422714 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:50.422732 | orchestrator | 2026-04-16 09:06:50.422753 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-04-16 09:06:50.422774 | orchestrator | Thursday 16 April 2026 09:06:48 +0000 (0:00:02.983) 0:00:44.365 ******** 2026-04-16 09:06:50.422793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:50.422813 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:50.422838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:50.422850 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:50.422864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:50.422888 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:50.422942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:59.902214 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:59.902309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:59.902323 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:59.902336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': 
True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:59.902347 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:59.902358 | orchestrator | 2026-04-16 09:06:59.902370 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-04-16 09:06:59.902398 | orchestrator | Thursday 16 April 2026 09:06:51 +0000 (0:00:02.931) 0:00:47.296 ******** 2026-04-16 09:06:59.902410 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:59.902420 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:59.902430 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:59.902440 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:59.902451 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:59.902462 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:59.902497 | orchestrator | 2026-04-16 09:06:59.902508 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-04-16 09:06:59.902515 | orchestrator | Thursday 16 April 2026 09:06:54 +0000 (0:00:02.804) 0:00:50.101 ******** 2026-04-16 09:06:59.902521 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:59.902527 | orchestrator | 2026-04-16 09:06:59.902533 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-04-16 09:06:59.902539 | orchestrator | Thursday 16 April 2026 09:06:55 +0000 (0:00:01.080) 0:00:51.181 ******** 2026-04-16 09:06:59.902546 | orchestrator | skipping: 
[testbed-node-0] 2026-04-16 09:06:59.902552 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:59.902558 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:59.902565 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:59.902571 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:59.902577 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:06:59.902583 | orchestrator | 2026-04-16 09:06:59.902589 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-04-16 09:06:59.902595 | orchestrator | Thursday 16 April 2026 09:06:57 +0000 (0:00:01.795) 0:00:52.977 ******** 2026-04-16 09:06:59.902604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:59.902612 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:06:59.902634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:06:59.902641 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:06:59.902653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-16 09:06:59.902665 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:06:59.902672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:59.902678 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:06:59.902685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:06:59.902692 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:06:59.902703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:09.700452 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:09.700572 | orchestrator | 2026-04-16 09:07:09.700587 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-04-16 09:07:09.700598 | orchestrator | Thursday 16 April 2026 09:07:00 +0000 (0:00:03.509) 0:00:56.487 ******** 2026-04-16 09:07:09.700611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:09.700658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:09.700683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:09.700695 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:09.700733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:09.700745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:09.700761 | orchestrator | 2026-04-16 09:07:09.700785 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-04-16 09:07:09.700795 | orchestrator | Thursday 16 April 2026 09:07:05 +0000 (0:00:04.469) 0:01:00.957 ******** 2026-04-16 09:07:09.700805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}}}}) 2026-04-16 09:07:09.700816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:09.700832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:13.160545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:13.160680 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:13.160699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:07:13.160712 | orchestrator | 2026-04-16 09:07:13.160726 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-04-16 09:07:13.160754 | orchestrator | Thursday 16 April 2026 09:07:11 +0000 (0:00:05.726) 0:01:06.684 ******** 2026-04-16 09:07:13.160769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:13.160783 | 
orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:13.160863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:13.160975 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:13.160998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option 
httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:13.161010 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:13.161022 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:13.161033 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:13.161045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2026-04-16 09:07:13.161056 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:13.161069 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:13.161092 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:13.161105 | orchestrator | 2026-04-16 09:07:13.161128 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-04-16 09:07:38.159702 | orchestrator | Thursday 16 April 2026 09:07:14 +0000 (0:00:02.961) 0:01:09.646 ******** 2026-04-16 09:07:38.159842 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.159862 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.159906 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.159921 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:07:38.159946 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:07:38.159961 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:07:38.159975 | orchestrator | 2026-04-16 09:07:38.159991 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-04-16 09:07:38.160005 | orchestrator | Thursday 16 April 2026 09:07:17 +0000 (0:00:03.423) 0:01:13.070 ******** 2026-04-16 09:07:38.160039 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:38.160057 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.160072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:38.160087 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.160102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:38.160117 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.160134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:38.160200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:38.160225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:07:38.160242 | orchestrator | 2026-04-16 09:07:38.160257 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-04-16 09:07:38.160273 | orchestrator | 
Thursday 16 April 2026 09:07:21 +0000 (0:00:04.437) 0:01:17.508 ******** 2026-04-16 09:07:38.160287 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:38.160303 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:38.160317 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:38.160332 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.160346 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.160361 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.160376 | orchestrator | 2026-04-16 09:07:38.160391 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-04-16 09:07:38.160405 | orchestrator | Thursday 16 April 2026 09:07:25 +0000 (0:00:03.221) 0:01:20.729 ******** 2026-04-16 09:07:38.160421 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:38.160446 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:38.160461 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:38.160476 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.160491 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.160505 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.160519 | orchestrator | 2026-04-16 09:07:38.160534 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-04-16 09:07:38.160549 | orchestrator | Thursday 16 April 2026 09:07:28 +0000 (0:00:03.188) 0:01:23.917 ******** 2026-04-16 09:07:38.160563 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:38.160577 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:38.160591 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:38.160605 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.160618 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.160632 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.160645 | orchestrator | 2026-04-16 09:07:38.160659 | 
orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-04-16 09:07:38.160673 | orchestrator | Thursday 16 April 2026 09:07:31 +0000 (0:00:03.150) 0:01:27.068 ******** 2026-04-16 09:07:38.160687 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:38.160762 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:38.160780 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.160794 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:38.160808 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.160822 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.160835 | orchestrator | 2026-04-16 09:07:38.160850 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-04-16 09:07:38.160863 | orchestrator | Thursday 16 April 2026 09:07:34 +0000 (0:00:02.699) 0:01:29.767 ******** 2026-04-16 09:07:38.160971 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:38.160986 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:38.161000 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:38.161072 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:38.161086 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:38.161101 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:38.161114 | orchestrator | 2026-04-16 09:07:38.161128 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-04-16 09:07:38.161142 | orchestrator | Thursday 16 April 2026 09:07:37 +0000 (0:00:02.877) 0:01:32.644 ******** 2026-04-16 09:07:38.161191 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.192763 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:44.192992 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.193027 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:44.193039 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.193049 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:44.193059 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.193068 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:44.193078 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.193088 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:44.193098 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-04-16 09:07:44.193107 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:44.193117 | orchestrator | 2026-04-16 09:07:44.193127 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-04-16 09:07:44.193137 | orchestrator | Thursday 16 April 2026 09:07:39 +0000 (0:00:02.865) 0:01:35.510 ******** 2026-04-16 09:07:44.193178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:44.193229 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:07:44.193243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:44.193254 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:07:44.193266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:44.193278 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:07:44.193312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:44.193326 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:44.193343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:44.193360 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:07:44.193372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:07:44.193383 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:07:44.193394 | orchestrator | 2026-04-16 09:07:44.193406 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-04-16 09:07:44.193417 | orchestrator | Thursday 16 April 2026 09:07:42 +0000 (0:00:02.765) 0:01:38.276 ******** 2026-04-16 09:07:44.193430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:07:44.193446 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:07:44.193476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:15.198672 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.198785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:15.198844 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.198901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:15.198909 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.198919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:15.198931 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.198942 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:15.198949 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.198955 | orchestrator | 2026-04-16 09:08:15.198962 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-04-16 09:08:15.198969 | orchestrator | Thursday 16 April 2026 09:07:45 +0000 (0:00:03.143) 0:01:41.420 ******** 2026-04-16 09:08:15.198983 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.198989 | orchestrator | 
skipping: [testbed-node-1] 2026-04-16 09:08:15.198995 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199015 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199022 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199027 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199033 | orchestrator | 2026-04-16 09:08:15.199039 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-04-16 09:08:15.199045 | orchestrator | Thursday 16 April 2026 09:07:49 +0000 (0:00:03.132) 0:01:44.552 ******** 2026-04-16 09:08:15.199051 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199056 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199062 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199069 | orchestrator | changed: [testbed-node-4] 2026-04-16 09:08:15.199079 | orchestrator | changed: [testbed-node-3] 2026-04-16 09:08:15.199089 | orchestrator | changed: [testbed-node-5] 2026-04-16 09:08:15.199098 | orchestrator | 2026-04-16 09:08:15.199104 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-04-16 09:08:15.199115 | orchestrator | Thursday 16 April 2026 09:07:53 +0000 (0:00:04.581) 0:01:49.134 ******** 2026-04-16 09:08:15.199121 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199127 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199132 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199138 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199144 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199149 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199155 | orchestrator | 2026-04-16 09:08:15.199161 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-04-16 09:08:15.199167 | orchestrator | Thursday 16 April 2026 09:07:56 +0000 (0:00:02.943) 
0:01:52.077 ******** 2026-04-16 09:08:15.199173 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199178 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199184 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199190 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199196 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199201 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199207 | orchestrator | 2026-04-16 09:08:15.199213 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-04-16 09:08:15.199219 | orchestrator | Thursday 16 April 2026 09:07:59 +0000 (0:00:02.886) 0:01:54.964 ******** 2026-04-16 09:08:15.199225 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199231 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199239 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199245 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199252 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199259 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199265 | orchestrator | 2026-04-16 09:08:15.199272 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-04-16 09:08:15.199279 | orchestrator | Thursday 16 April 2026 09:08:02 +0000 (0:00:02.820) 0:01:57.785 ******** 2026-04-16 09:08:15.199288 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199298 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199308 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199319 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199326 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199333 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199340 | orchestrator | 2026-04-16 09:08:15.199347 | orchestrator | TASK [neutron : Copying over nsx.ini] 
****************************************** 2026-04-16 09:08:15.199354 | orchestrator | Thursday 16 April 2026 09:08:05 +0000 (0:00:02.758) 0:02:00.543 ******** 2026-04-16 09:08:15.199360 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199367 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199378 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199385 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199392 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199398 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199405 | orchestrator | 2026-04-16 09:08:15.199412 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-04-16 09:08:15.199418 | orchestrator | Thursday 16 April 2026 09:08:07 +0000 (0:00:02.725) 0:02:03.269 ******** 2026-04-16 09:08:15.199425 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199432 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199438 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199445 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199452 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199459 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:15.199465 | orchestrator | 2026-04-16 09:08:15.199475 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-04-16 09:08:15.199485 | orchestrator | Thursday 16 April 2026 09:08:10 +0000 (0:00:02.718) 0:02:05.987 ******** 2026-04-16 09:08:15.199495 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199504 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199511 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199517 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:15.199525 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199532 | orchestrator | skipping: 
[testbed-node-5] 2026-04-16 09:08:15.199538 | orchestrator | 2026-04-16 09:08:15.199545 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-04-16 09:08:15.199552 | orchestrator | Thursday 16 April 2026 09:08:13 +0000 (0:00:02.887) 0:02:08.874 ******** 2026-04-16 09:08:15.199559 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:15.199567 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:15.199574 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:15.199581 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:15.199587 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:15.199594 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:15.199600 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:15.199606 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:15.199612 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:15.199622 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:21.306255 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-04-16 09:08:21.306399 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:21.306426 | orchestrator | 2026-04-16 09:08:21.306437 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-04-16 09:08:21.306447 | orchestrator | Thursday 16 April 2026 09:08:16 +0000 (0:00:02.828) 0:02:11.703 ******** 2026-04-16 09:08:21.306476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:21.306513 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:21.306524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 
'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:21.306534 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:21.306544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:21.306553 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:21.306594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:21.306607 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:08:21.306622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:21.306638 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:21.306648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:21.306657 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:08:21.306665 | orchestrator | 2026-04-16 09:08:21.306675 | 
orchestrator | TASK [service-check-containers : neutron | Check containers] ******************* 2026-04-16 09:08:21.306684 | orchestrator | Thursday 16 April 2026 09:08:19 +0000 (0:00:03.182) 0:02:14.886 ******** 2026-04-16 09:08:21.306693 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:08:21.306703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:08:21.306726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:08:26.249194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:08:26.249316 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:08:26.249354 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-04-16 09:08:26.249375 | orchestrator | 2026-04-16 09:08:26.249396 | orchestrator | TASK [service-check-containers : neutron | Notify handlers to restart 
containers] *** 2026-04-16 09:08:26.249415 | orchestrator | Thursday 16 April 2026 09:08:22 +0000 (0:00:03.316) 0:02:18.203 ******** 2026-04-16 09:08:26.249435 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:08:26.249455 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249473 | orchestrator | } 2026-04-16 09:08:26.249493 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:08:26.249512 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249529 | orchestrator | } 2026-04-16 09:08:26.249548 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:08:26.249566 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249583 | orchestrator | } 2026-04-16 09:08:26.249601 | orchestrator | changed: [testbed-node-3] => { 2026-04-16 09:08:26.249619 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249639 | orchestrator | } 2026-04-16 09:08:26.249657 | orchestrator | changed: [testbed-node-4] => { 2026-04-16 09:08:26.249675 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249693 | orchestrator | } 2026-04-16 09:08:26.249710 | orchestrator | changed: [testbed-node-5] => { 2026-04-16 09:08:26.249728 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:08:26.249784 | orchestrator | } 2026-04-16 09:08:26.249804 | orchestrator | 2026-04-16 09:08:26.249826 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:08:26.249878 | orchestrator | Thursday 16 April 2026 09:08:24 +0000 (0:00:01.811) 0:02:20.015 ******** 2026-04-16 09:08:26.250009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:26.250108 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:08:26.250132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:26.250153 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:08:26.250174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:08:26.250195 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:08:26.250212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:08:26.250238 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:08:26.250269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:11:22.623867 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:11:22.624015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-04-16 09:11:22.624056 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:11:22.624072 | orchestrator | 2026-04-16 09:11:22.624087 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-04-16 09:11:22.624103 | orchestrator | Thursday 16 April 2026 09:08:28 +0000 (0:00:03.597) 0:02:23.612 ******** 2026-04-16 09:11:22.624117 | orchestrator | skipping: 
[testbed-node-0]
2026-04-16 09:11:22.624131 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:11:22.624144 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:11:22.624158 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:11:22.624171 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:11:22.624186 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:11:22.624200 | orchestrator |
2026-04-16 09:11:22.624214 | orchestrator | TASK [neutron : Running Neutron database expand container] *********************
2026-04-16 09:11:22.624229 | orchestrator | Thursday 16 April 2026 09:08:29 +0000 (0:00:01.792) 0:02:25.405 ********
2026-04-16 09:11:22.624237 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:11:22.624245 | orchestrator |
2026-04-16 09:11:22.624253 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624261 | orchestrator | Thursday 16 April 2026 09:09:06 +0000 (0:00:36.402) 0:03:01.808 ********
2026-04-16 09:11:22.624270 | orchestrator |
2026-04-16 09:11:22.624277 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624286 | orchestrator | Thursday 16 April 2026 09:09:06 +0000 (0:00:00.446) 0:03:02.255 ********
2026-04-16 09:11:22.624294 | orchestrator |
2026-04-16 09:11:22.624302 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624310 | orchestrator | Thursday 16 April 2026 09:09:07 +0000 (0:00:00.453) 0:03:02.708 ********
2026-04-16 09:11:22.624318 | orchestrator |
2026-04-16 09:11:22.624326 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624361 | orchestrator | Thursday 16 April 2026 09:09:07 +0000 (0:00:00.439) 0:03:03.148 ********
2026-04-16 09:11:22.624369 | orchestrator |
2026-04-16 09:11:22.624377 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624385 | orchestrator | Thursday 16 April 2026 09:09:08 +0000 (0:00:00.428) 0:03:03.577 ********
2026-04-16 09:11:22.624393 | orchestrator |
2026-04-16 09:11:22.624401 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624409 | orchestrator | Thursday 16 April 2026 09:09:08 +0000 (0:00:00.474) 0:03:04.051 ********
2026-04-16 09:11:22.624417 | orchestrator |
2026-04-16 09:11:22.624425 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-04-16 09:11:22.624433 | orchestrator | Thursday 16 April 2026 09:09:09 +0000 (0:00:00.759) 0:03:04.810 ********
2026-04-16 09:11:22.624440 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:11:22.624448 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:11:22.624456 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:11:22.624464 | orchestrator |
2026-04-16 09:11:22.624472 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-04-16 09:11:22.624480 | orchestrator | Thursday 16 April 2026 09:09:46 +0000 (0:00:37.422) 0:03:42.233 ********
2026-04-16 09:11:22.624487 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:11:22.624495 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:11:22.624503 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:11:22.624511 | orchestrator |
2026-04-16 09:11:22.624519 | orchestrator | TASK [neutron : Checking neutron pending contract scripts] *********************
2026-04-16 09:11:22.624527 | orchestrator | Thursday 16 April 2026 09:10:54 +0000 (0:01:07.362) 0:04:49.596 ********
2026-04-16 09:11:22.624535 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:11:22.624543 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:11:22.624551 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:11:22.624559 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:11:22.624567 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:11:22.624574 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:11:22.624582 | orchestrator |
2026-04-16 09:11:22.624590 | orchestrator | TASK [neutron : Stopping all neutron-server for contract db] *******************
2026-04-16 09:11:22.624598 | orchestrator | Thursday 16 April 2026 09:10:55 +0000 (0:00:01.764) 0:04:51.361 ********
2026-04-16 09:11:22.624606 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:11:22.624614 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:11:22.624621 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:11:22.624629 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:11:22.624637 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:11:22.624645 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:11:22.624653 | orchestrator |
2026-04-16 09:11:22.624677 | orchestrator | TASK [neutron : Running Neutron database contract container] *******************
2026-04-16 09:11:22.624691 | orchestrator | Thursday 16 April 2026 09:11:00 +0000 (0:00:04.514) 0:04:55.875 ********
2026-04-16 09:11:22.624704 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:11:22.624715 | orchestrator |
2026-04-16 09:11:22.624750 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624784 | orchestrator | Thursday 16 April 2026 09:11:16 +0000 (0:00:16.420) 0:05:12.295 ********
2026-04-16 09:11:22.624798 | orchestrator |
2026-04-16 09:11:22.624811 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624825 | orchestrator | Thursday 16 April 2026 09:11:17 +0000 (0:00:00.478) 0:05:12.773 ********
2026-04-16 09:11:22.624838 | orchestrator |
2026-04-16 09:11:22.624851 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624864 | orchestrator | Thursday 16 April 2026 09:11:17 +0000 (0:00:00.468) 0:05:13.242 ********
2026-04-16 09:11:22.624875 | orchestrator |
2026-04-16 09:11:22.624888 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624900 | orchestrator | Thursday 16 April 2026 09:11:18 +0000 (0:00:00.465) 0:05:13.708 ********
2026-04-16 09:11:22.624926 | orchestrator |
2026-04-16 09:11:22.624939 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624953 | orchestrator | Thursday 16 April 2026 09:11:18 +0000 (0:00:00.425) 0:05:14.133 ********
2026-04-16 09:11:22.624966 | orchestrator |
2026-04-16 09:11:22.624979 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-04-16 09:11:22.624992 | orchestrator | Thursday 16 April 2026 09:11:19 +0000 (0:00:00.450) 0:05:14.584 ********
2026-04-16 09:11:22.625005 | orchestrator |
2026-04-16 09:11:22.625019 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-04-16 09:11:22.625032 | orchestrator | Thursday 16 April 2026 09:11:19 +0000 (0:00:00.769) 0:05:15.353 ********
2026-04-16 09:11:22.625045 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:11:22.625058 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:11:22.625072 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:11:22.625085 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:11:22.625098 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:11:22.625112 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:11:22.625125 | orchestrator |
2026-04-16 09:11:22.625138 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:11:22.625152 | orchestrator | testbed-node-0 : ok=21  changed=8  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-16 09:11:22.625167 | orchestrator | testbed-node-1 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-16 09:11:22.625178 | orchestrator | testbed-node-2 : ok=18  changed=6  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2026-04-16 09:11:22.625191 | orchestrator | testbed-node-3 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-16 09:11:22.625202 | orchestrator | testbed-node-4 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-16 09:11:22.625215 | orchestrator | testbed-node-5 : ok=17  changed=6  unreachable=0 failed=0 skipped=34  rescued=0 ignored=0
2026-04-16 09:11:22.625229 | orchestrator |
2026-04-16 09:11:22.625242 | orchestrator |
2026-04-16 09:11:22.625256 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:11:22.625270 | orchestrator | Thursday 16 April 2026 09:11:22 +0000 (0:00:02.785) 0:05:18.139 ********
2026-04-16 09:11:22.625282 | orchestrator | ===============================================================================
2026-04-16 09:11:22.625290 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 67.36s
2026-04-16 09:11:22.625298 | orchestrator | neutron : Restart neutron-server container ----------------------------- 37.42s
2026-04-16 09:11:22.625306 | orchestrator | neutron : Running Neutron database expand container -------------------- 36.40s
2026-04-16 09:11:22.625316 | orchestrator | neutron : Running Neutron database contract container ------------------ 16.42s
2026-04-16 09:11:22.625330 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.73s
2026-04-16 09:11:22.625343 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.58s
2026-04-16 09:11:22.625355 | orchestrator | neutron : Stopping all neutron-server for contract db ------------------- 4.51s
2026-04-16 09:11:22.625368 |
orchestrator | neutron : Copying over config.json files for services ------------------- 4.47s 2026-04-16 09:11:22.625382 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.44s 2026-04-16 09:11:22.625395 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.67s 2026-04-16 09:11:22.625409 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.65s 2026-04-16 09:11:22.625422 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.60s 2026-04-16 09:11:22.625447 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.51s 2026-04-16 09:11:22.625460 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.42s 2026-04-16 09:11:22.625474 | orchestrator | Load and persist kernel modules ----------------------------------------- 3.34s 2026-04-16 09:11:22.625485 | orchestrator | Setting sysctl values --------------------------------------------------- 3.32s 2026-04-16 09:11:22.625500 | orchestrator | service-check-containers : neutron | Check containers ------------------- 3.32s 2026-04-16 09:11:22.625509 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.22s 2026-04-16 09:11:22.625517 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.19s 2026-04-16 09:11:22.625534 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.18s 2026-04-16 09:11:23.073000 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-16 09:11:23.073120 | orchestrator | + osism apply -a reconfigure nova 2026-04-16 09:11:24.385309 | orchestrator | 2026-04-16 09:11:24 | INFO  | Prepare task for execution of nova. 2026-04-16 09:11:24.450324 | orchestrator | 2026-04-16 09:11:24 | INFO  | Task 96ee19b8-ea33-4282-924d-b1a87a5315ea (nova) was prepared for execution. 
2026-04-16 09:11:24.450422 | orchestrator | 2026-04-16 09:11:24 | INFO  | It takes a moment until task 96ee19b8-ea33-4282-924d-b1a87a5315ea (nova) has been started and output is visible here.
2026-04-16 09:13:20.675076 | orchestrator |
2026-04-16 09:13:20.675211 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:13:20.675230 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:13:20.675244 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:13:20.675266 | orchestrator |
2026-04-16 09:13:20.675277 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-16 09:13:20.675288 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:13:20.675299 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:13:20.675335 | orchestrator | Thursday 16 April 2026 09:11:28 +0000 (0:00:01.140) 0:00:01.140 ********
2026-04-16 09:13:20.675355 | orchestrator | changed: [testbed-manager]
2026-04-16 09:13:20.675372 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:13:20.675389 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:13:20.675406 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:13:20.675424 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:13:20.675441 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:13:20.675459 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:13:20.675478 | orchestrator |
2026-04-16 09:13:20.675498 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:13:20.675517 | orchestrator | Thursday 16 April 2026 09:11:31 +0000 (0:00:02.651) 0:00:03.791 ********
2026-04-16 09:13:20.675535 | orchestrator | changed: [testbed-manager]
2026-04-16 09:13:20.675554 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:13:20.675572 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:13:20.675592 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:13:20.675612 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:13:20.675631 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:13:20.675644 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:13:20.675654 | orchestrator |
2026-04-16 09:13:20.675701 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:13:20.675714 | orchestrator | Thursday 16 April 2026 09:11:32 +0000 (0:00:00.713) 0:00:04.505 ********
2026-04-16 09:13:20.675726 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-16 09:13:20.675763 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-16 09:13:20.675775 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-16 09:13:20.675785 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-16 09:13:20.675796 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-16 09:13:20.675807 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-16 09:13:20.675817 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-16 09:13:20.675828 | orchestrator |
2026-04-16 09:13:20.675839 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-16 09:13:20.675850 | orchestrator |
2026-04-16 09:13:20.675860 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 09:13:20.675871 | orchestrator | Thursday 16 April 2026 09:11:33 +0000 (0:00:01.091) 0:00:05.596 ********
2026-04-16 09:13:20.675882 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:13:20.675893 | orchestrator |
2026-04-16 09:13:20.675903 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-04-16 09:13:20.675914 | orchestrator | Thursday 16 April 2026 09:11:34 +0000 (0:00:00.960) 0:00:06.557 ********
2026-04-16 09:13:20.675925 | orchestrator | ok: [testbed-node-0] => (item=nova_cell0)
2026-04-16 09:13:20.675936 | orchestrator | ok: [testbed-node-0] => (item=nova_api)
2026-04-16 09:13:20.675947 | orchestrator |
2026-04-16 09:13:20.675957 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-04-16 09:13:20.675968 | orchestrator | Thursday 16 April 2026 09:11:38 +0000 (0:00:04.421) 0:00:10.979 ********
2026-04-16 09:13:20.675979 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 09:13:20.675989 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 09:13:20.676000 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676011 | orchestrator |
2026-04-16 09:13:20.676022 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-16 09:13:20.676033 | orchestrator | Thursday 16 April 2026 09:11:43 +0000 (0:00:04.468) 0:00:15.447 ********
2026-04-16 09:13:20.676044 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676054 | orchestrator |
2026-04-16 09:13:20.676065 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-16 09:13:20.676075 | orchestrator | Thursday 16 April 2026 09:11:43 +0000 (0:00:00.657) 0:00:16.104 ********
2026-04-16 09:13:20.676086 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676096 | orchestrator |
2026-04-16 09:13:20.676121 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-16 09:13:20.676133 | orchestrator | Thursday 16 April 2026 09:11:44 +0000 (0:00:01.082) 0:00:17.187 ********
2026-04-16 09:13:20.676143 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:13:20.676154 | orchestrator |
2026-04-16 09:13:20.676165 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 09:13:20.676176 | orchestrator | Thursday 16 April 2026 09:11:47 +0000 (0:00:02.763) 0:00:19.951 ********
2026-04-16 09:13:20.676186 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.676197 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676207 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676218 | orchestrator |
2026-04-16 09:13:20.676228 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-16 09:13:20.676239 | orchestrator | Thursday 16 April 2026 09:11:48 +0000 (0:00:00.739) 0:00:20.690 ********
2026-04-16 09:13:20.676250 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676260 | orchestrator |
2026-04-16 09:13:20.676271 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-04-16 09:13:20.676304 | orchestrator | Thursday 16 April 2026 09:12:22 +0000 (0:00:34.148) 0:00:54.839 ********
2026-04-16 09:13:20.676316 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676326 | orchestrator |
2026-04-16 09:13:20.676337 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-16 09:13:20.676348 | orchestrator | Thursday 16 April 2026 09:12:38 +0000 (0:00:15.691) 0:01:10.530 ********
2026-04-16 09:13:20.676367 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676378 | orchestrator |
2026-04-16 09:13:20.676388 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-16 09:13:20.676399 | orchestrator | Thursday 16 April 2026 09:12:53 +0000 (0:00:14.737) 0:01:25.268 ********
2026-04-16 09:13:20.676410 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676421 | orchestrator |
2026-04-16 09:13:20.676431 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-04-16 09:13:20.676442 | orchestrator | Thursday 16 April 2026 09:12:54 +0000 (0:00:01.163) 0:01:26.432 ********
2026-04-16 09:13:20.676453 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.676471 | orchestrator |
2026-04-16 09:13:20.676489 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 09:13:20.676507 | orchestrator | Thursday 16 April 2026 09:12:54 +0000 (0:00:00.614) 0:01:27.046 ********
2026-04-16 09:13:20.676524 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.676542 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676560 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676579 | orchestrator |
2026-04-16 09:13:20.676599 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-16 09:13:20.676618 | orchestrator | Thursday 16 April 2026 09:12:55 +0000 (0:00:00.541) 0:01:27.588 ********
2026-04-16 09:13:20.676635 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.676655 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676699 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676715 | orchestrator |
2026-04-16 09:13:20.676726 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-16 09:13:20.676736 | orchestrator |
2026-04-16 09:13:20.676747 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 09:13:20.676758 | orchestrator | Thursday 16 April 2026 09:12:56 +0000 (0:00:00.878) 0:01:28.466 ********
2026-04-16 09:13:20.676769 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:13:20.676779 | orchestrator |
2026-04-16 09:13:20.676790 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-04-16 09:13:20.676801 | orchestrator | Thursday 16 April 2026 09:12:57 +0000 (0:00:00.947) 0:01:29.414 ********
2026-04-16 09:13:20.676811 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676822 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676833 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676843 | orchestrator |
2026-04-16 09:13:20.676854 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-04-16 09:13:20.676864 | orchestrator | Thursday 16 April 2026 09:12:59 +0000 (0:00:02.123) 0:01:31.538 ********
2026-04-16 09:13:20.676875 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676886 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676897 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.676907 | orchestrator |
2026-04-16 09:13:20.676918 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-16 09:13:20.676928 | orchestrator | Thursday 16 April 2026 09:13:02 +0000 (0:00:02.687) 0:01:34.225 ********
2026-04-16 09:13:20.676939 | orchestrator | skipping: [testbed-node-1] => (item=openstack)
2026-04-16 09:13:20.676950 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.676960 | orchestrator | skipping: [testbed-node-2] => (item=openstack)
2026-04-16 09:13:20.676971 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.676982 | orchestrator | ok: [testbed-node-0] => (item=openstack)
2026-04-16 09:13:20.676992 | orchestrator |
2026-04-16 09:13:20.677003 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-16 09:13:20.677013 | orchestrator | Thursday 16 April 2026 09:13:06 +0000 (0:00:04.227) 0:01:38.453 ********
2026-04-16 09:13:20.677024 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 09:13:20.677035 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.677055 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 09:13:20.677065 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.677076 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-04-16 09:13:20.677087 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-04-16 09:13:20.677098 | orchestrator |
2026-04-16 09:13:20.677109 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-04-16 09:13:20.677119 | orchestrator | Thursday 16 April 2026 09:13:18 +0000 (0:00:12.418) 0:01:50.872 ********
2026-04-16 09:13:20.677130 | orchestrator | skipping: [testbed-node-0] => (item=openstack)
2026-04-16 09:13:20.677141 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.677152 | orchestrator | skipping: [testbed-node-1] => (item=openstack)
2026-04-16 09:13:20.677162 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.677180 | orchestrator | skipping: [testbed-node-2] => (item=openstack)
2026-04-16 09:13:20.677192 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.677202 | orchestrator |
2026-04-16 09:13:20.677213 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-04-16 09:13:20.677224 | orchestrator | Thursday 16 April 2026 09:13:19 +0000 (0:00:00.511) 0:01:51.383 ********
2026-04-16 09:13:20.677235 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-04-16 09:13:20.677245 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:13:20.677256 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-04-16 09:13:20.677267 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.677278 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-04-16 09:13:20.677288 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:13:20.677299 | orchestrator |
2026-04-16 09:13:20.677309 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-16 09:13:20.677320 | orchestrator | Thursday 16 April 2026 09:13:20 +0000 (0:00:01.028) 0:01:52.411 ********
2026-04-16 09:13:20.677331 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:13:20.677342 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:13:20.677363 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048219 | orchestrator |
2026-04-16 09:14:36.048343 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-16 09:14:36.048361 | orchestrator | Thursday 16 April 2026 09:13:20 +0000 (0:00:00.547) 0:01:52.959 ********
2026-04-16 09:14:36.048374 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048386 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048397 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:14:36.048410 | orchestrator |
2026-04-16 09:14:36.048421 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-16 09:14:36.048432 | orchestrator | Thursday 16 April 2026 09:13:21 +0000 (0:00:00.891) 0:01:53.851 ********
2026-04-16 09:14:36.048443 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048454 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048465 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:14:36.048476 | orchestrator |
2026-04-16 09:14:36.048488 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-16 09:14:36.048499 | orchestrator | Thursday 16 April 2026 09:13:24 +0000 (0:00:02.555) 0:01:56.406 ********
2026-04-16 09:14:36.048510 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048521 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048531 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:14:36.048542 | orchestrator |
2026-04-16 09:14:36.048553 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-16 09:14:36.048564 | orchestrator | Thursday 16 April 2026 09:13:36 +0000 (0:00:11.965) 0:02:08.372 ********
2026-04-16 09:14:36.048575 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048586 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048597 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:14:36.048608 | orchestrator |
2026-04-16 09:14:36.048618 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-16 09:14:36.048738 | orchestrator | Thursday 16 April 2026 09:13:48 +0000 (0:00:12.296) 0:02:20.668 ********
2026-04-16 09:14:36.048762 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:14:36.048777 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048790 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048802 | orchestrator |
2026-04-16 09:14:36.048815 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-04-16 09:14:36.048828 | orchestrator | Thursday 16 April 2026 09:13:49 +0000 (0:00:01.130) 0:02:21.799 ********
2026-04-16 09:14:36.048841 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:14:36.048853 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048866 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048879 | orchestrator |
2026-04-16 09:14:36.048891 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-04-16 09:14:36.048904 | orchestrator | Thursday 16 April 2026 09:13:50 +0000 (0:00:00.770) 0:02:22.569 ********
2026-04-16 09:14:36.048917 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.048929 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.048941 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:14:36.048954 | orchestrator |
2026-04-16 09:14:36.048966 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-16 09:14:36.048979 | orchestrator | Thursday 16 April 2026 09:14:03 +0000 (0:00:13.230) 0:02:35.800 ********
2026-04-16 09:14:36.048991 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:14:36.049004 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:36.049016 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:36.049029 | orchestrator |
2026-04-16 09:14:36.049041 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-16 09:14:36.049055 | orchestrator |
2026-04-16 09:14:36.049066 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 09:14:36.049077 | orchestrator | Thursday 16 April 2026 09:14:04 +0000 (0:00:00.694) 0:02:36.495 ********
2026-04-16 09:14:36.049088 | orchestrator | included: /ansible/roles/nova/tasks/reconfigure.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:14:36.049149 | orchestrator |
2026-04-16 09:14:36.049161 | orchestrator | TASK [service-ks-register : nova | Creating/deleting services] *****************
2026-04-16 09:14:36.049172 | orchestrator | Thursday 16 April 2026 09:14:05 +0000 (0:00:00.972) 0:02:37.467 ********
2026-04-16 09:14:36.049183 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-04-16 09:14:36.049194 | orchestrator | ok: [testbed-node-0] => (item=nova (compute))
2026-04-16 09:14:36.049204 | orchestrator |
2026-04-16 09:14:36.049216 | orchestrator | TASK [service-ks-register : nova | Creating/deleting endpoints] ****************
2026-04-16 09:14:36.049227 | orchestrator | Thursday 16 April 2026 09:14:08 +0000 (0:00:03.408) 0:02:40.875 ********
2026-04-16 09:14:36.049238 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-04-16 09:14:36.049251 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-04-16 09:14:36.049278 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-04-16 09:14:36.049289 | orchestrator | ok: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-04-16 09:14:36.049300 | orchestrator |
2026-04-16 09:14:36.049311 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-04-16 09:14:36.049322 | orchestrator | Thursday 16 April 2026 09:14:15 +0000 (0:00:06.602) 0:02:47.478 ********
2026-04-16 09:14:36.049333 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-04-16 09:14:36.049343 | orchestrator |
2026-04-16 09:14:36.049354 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-04-16 09:14:36.049365 | orchestrator | Thursday 16 April 2026 09:14:18 +0000 (0:00:03.390) 0:02:50.868 ********
2026-04-16 09:14:36.049376 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-04-16 09:14:36.049397 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-04-16 09:14:36.049408 | orchestrator |
2026-04-16 09:14:36.049419 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-04-16 09:14:36.049451 | orchestrator | Thursday 16 April 2026 09:14:23 +0000 (0:00:04.904) 0:02:55.773 ********
2026-04-16 09:14:36.049463 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-04-16 09:14:36.049474 | orchestrator |
2026-04-16 09:14:36.049485 | orchestrator | TASK [service-ks-register : nova | Granting/revoking user roles] ***************
2026-04-16 09:14:36.049496 | orchestrator | Thursday 16 April 2026 09:14:26 +0000 (0:00:03.352) 0:02:59.125 ********
2026-04-16 09:14:36.049507 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> admin)
2026-04-16 09:14:36.049517 | orchestrator | ok: [testbed-node-0] => (item=nova -> service -> service)
2026-04-16 09:14:36.049528 | orchestrator |
2026-04-16 09:14:36.049539 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-16 09:14:36.049550 | orchestrator | Thursday 16 April 2026 09:14:34 +0000 (0:00:07.519) 0:03:06.644 ********
2026-04-16 09:14:36.049566 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:36.049584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:36.049602 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:36.049655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:41.224292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:14:41.224398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:41.224422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:14:41.224447 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:14:41.224454 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:14:41.224460 | orchestrator |
2026-04-16 09:14:41.224467 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-04-16 09:14:41.224473 | orchestrator | Thursday 16 April 2026 09:14:36 +0000 (0:00:02.424) 0:03:09.068 ********
2026-04-16 09:14:41.224492 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:14:41.224507 | orchestrator |
2026-04-16 09:14:41.224513 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-04-16 09:14:41.224518 | orchestrator | Thursday 16 April 2026 09:14:36 +0000 (0:00:00.114) 0:03:09.183 ********
2026-04-16 09:14:41.224524 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:14:41.224529 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:14:41.224534 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:14:41.224540 | orchestrator |
2026-04-16 09:14:41.224545 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-04-16 09:14:41.224551 | orchestrator | Thursday 16 April 2026 09:14:37 +0000 (0:00:00.308) 0:03:09.492 ********
2026-04-16 09:14:41.224556 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16
09:14:41.224561 | orchestrator | 2026-04-16 09:14:41.224566 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-16 09:14:41.224572 | orchestrator | Thursday 16 April 2026 09:14:38 +0000 (0:00:01.077) 0:03:10.570 ******** 2026-04-16 09:14:41.224577 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:14:41.224582 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:14:41.224587 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:14:41.224592 | orchestrator | 2026-04-16 09:14:41.224597 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-16 09:14:41.224602 | orchestrator | Thursday 16 April 2026 09:14:38 +0000 (0:00:00.318) 0:03:10.888 ******** 2026-04-16 09:14:41.224608 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:14:41.224614 | orchestrator | 2026-04-16 09:14:41.224619 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 09:14:41.224649 | orchestrator | Thursday 16 April 2026 09:14:39 +0000 (0:00:01.114) 0:03:12.003 ******** 2026-04-16 09:14:41.224666 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:41.224682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:41.224694 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:43.712185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:43.712280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:43.712327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:43.712339 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:43.712365 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:43.712374 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:43.712383 | orchestrator | 2026-04-16 09:14:43.712393 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 09:14:43.712408 | orchestrator | Thursday 16 April 2026 
09:14:42 +0000 (0:00:03.172) 0:03:15.176 ******** 2026-04-16 09:14:43.712419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:43.712433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:43.712442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:43.712452 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:14:43.712520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.807619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.807796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:44.807816 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:14:44.807832 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.807847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.807963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:44.807990 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:14:44.808008 | orchestrator | 2026-04-16 09:14:44.808026 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 09:14:44.808045 | orchestrator | Thursday 16 April 2026 09:14:44 +0000 (0:00:01.122) 0:03:16.298 ******** 2026-04-16 09:14:44.808073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.808092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:44.808112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}})  2026-04-16 09:14:44.808130 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:14:44.808163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:47.460368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:47.460451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:47.460460 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:14:47.460467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:47.460472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:47.460504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:47.460510 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:14:47.460514 | 
orchestrator | 2026-04-16 09:14:47.460519 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-16 09:14:47.460525 | orchestrator | Thursday 16 April 2026 09:14:45 +0000 (0:00:01.142) 0:03:17.440 ******** 2026-04-16 09:14:47.460546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:47.460552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:47.460557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:47.460578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:54.613341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:54.613353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:54.613364 | orchestrator | 2026-04-16 09:14:54.613377 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-16 09:14:54.613409 | orchestrator | Thursday 16 April 2026 09:14:48 +0000 (0:00:03.597) 0:03:21.037 ******** 2026-04-16 09:14:54.613429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:54.613521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:58.214691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:58.214839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:14:58.215783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:58.215822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:58.215844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:14:58.215865 | orchestrator | 2026-04-16 09:14:58.215895 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-16 09:14:58.215931 | orchestrator | Thursday 16 April 
2026 09:14:57 +0000 (0:00:08.790) 0:03:29.827 ******** 2026-04-16 09:14:58.215944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:58.215958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:58.215985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:14:58.215997 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:14:58.216010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:14:58.216037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:15:09.281228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:15:09.281432 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:15:09.281466 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:15:09.281482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:15:09.281509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:15:09.281522 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:15:09.281534 | orchestrator | 2026-04-16 09:15:09.281546 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-16 09:15:09.281558 | orchestrator | Thursday 16 April 2026 09:14:58 +0000 (0:00:00.766) 0:03:30.594 ******** 2026-04-16 09:15:09.281569 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:15:09.281579 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:15:09.281590 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:15:09.281601 | orchestrator | 2026-04-16 09:15:09.281642 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-16 09:15:09.281656 | orchestrator | Thursday 16 April 2026 09:14:59 +0000 (0:00:00.706) 0:03:31.300 ******** 2026-04-16 09:15:09.281731 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:15:09.281754 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:15:09.281783 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:15:09.281802 | orchestrator | 2026-04-16 09:15:09.281820 | orchestrator | TASK [nova : Copying over 
vendordata file for nova services] ******************* 2026-04-16 09:15:09.281865 | orchestrator | Thursday 16 April 2026 09:15:00 +0000 (0:00:00.942) 0:03:32.242 ******** 2026-04-16 09:15:09.281887 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-16 09:15:09.281907 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-16 09:15:09.281927 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:15:09.281947 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-16 09:15:09.281966 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-16 09:15:09.281985 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:15:09.281998 | orchestrator | skipping: [testbed-node-2] => (item=nova-metadata)  2026-04-16 09:15:09.282011 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-04-16 09:15:09.282088 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:15:09.282101 | orchestrator | 2026-04-16 09:15:09.282115 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************ 2026-04-16 09:15:09.282126 | orchestrator | Thursday 16 April 2026 09:15:00 +0000 (0:00:00.541) 0:03:32.784 ******** 2026-04-16 09:15:09.282138 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'}) 2026-04-16 09:15:09.282152 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'}) 2026-04-16 09:15:09.282163 | orchestrator | 2026-04-16 09:15:09.282174 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] *************** 2026-04-16 09:15:09.282185 | orchestrator | Thursday 16 April 2026 09:15:02 +0000 (0:00:01.825) 0:03:34.610 ******** 2026-04-16 09:15:09.282196 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:15:09.282207 | 
orchestrator | changed: [testbed-node-1] 2026-04-16 09:15:09.282218 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:15:09.282228 | orchestrator | 2026-04-16 09:15:09.282239 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] ********** 2026-04-16 09:15:09.282250 | orchestrator | Thursday 16 April 2026 09:15:05 +0000 (0:00:02.617) 0:03:37.227 ******** 2026-04-16 09:15:09.282261 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:15:09.282272 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:15:09.282283 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:15:09.282294 | orchestrator | 2026-04-16 09:15:09.282305 | orchestrator | TASK [service-check-containers : nova | Check containers] ********************** 2026-04-16 09:15:09.282315 | orchestrator | Thursday 16 April 2026 09:15:07 +0000 (0:00:02.502) 0:03:39.729 ******** 2026-04-16 09:15:09.282328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk']}}}}) 2026-04-16 09:15:09.282373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:15:09.282413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:15:11.600743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:15:11.600833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:15:11.600880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:15:11.600891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:15:11.600962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:15:11.600974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:15:11.600982 | orchestrator | 2026-04-16 09:15:11.600992 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] *** 2026-04-16 09:15:11.601001 | orchestrator | Thursday 16 April 2026 09:15:10 +0000 (0:00:03.310) 0:03:43.040 ******** 2026-04-16 09:15:11.601010 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:15:11.601019 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:15:11.601028 | orchestrator | } 2026-04-16 09:15:11.601036 | 
orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:15:11.601044 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:15:11.601051 | orchestrator | } 2026-04-16 09:15:11.601059 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:15:11.601074 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:15:11.601082 | orchestrator | } 2026-04-16 09:15:11.601089 | orchestrator | 2026-04-16 09:15:11.601098 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:15:11.601106 | orchestrator | Thursday 16 April 2026 09:15:11 +0000 (0:00:00.339) 0:03:43.379 ******** 2026-04-16 09:15:11.601119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:15:11.601129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:15:11.601144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:16:33.395863 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:16:33.396038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:16:33.396107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:16:33.396124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:16:33.396137 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:16:33.396149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:16:33.396183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:16:33.396206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:16:33.396218 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:16:33.396230 | orchestrator | 2026-04-16 09:16:33.396242 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 09:16:33.396255 | orchestrator | Thursday 16 April 2026 09:15:12 +0000 (0:00:01.280) 0:03:44.660 ******** 2026-04-16 09:16:33.396266 | orchestrator | 2026-04-16 09:16:33.396278 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 
09:16:33.396289 | orchestrator | Thursday 16 April 2026 09:15:12 +0000 (0:00:00.316) 0:03:44.976 ******** 2026-04-16 09:16:33.396299 | orchestrator | 2026-04-16 09:16:33.396311 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-04-16 09:16:33.396326 | orchestrator | Thursday 16 April 2026 09:15:12 +0000 (0:00:00.145) 0:03:45.121 ******** 2026-04-16 09:16:33.396338 | orchestrator | 2026-04-16 09:16:33.396349 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-04-16 09:16:33.396360 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-16 09:16:33.396372 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-16 09:16:33.396394 | orchestrator | Thursday 16 April 2026 09:15:13 +0000 (0:00:00.146) 0:03:45.268 ******** 2026-04-16 09:16:33.396405 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:16:33.396416 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:16:33.396440 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:16:33.396452 | orchestrator | 2026-04-16 09:16:33.396463 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-04-16 09:16:33.396474 | orchestrator | Thursday 16 April 2026 09:15:40 +0000 (0:00:27.421) 0:04:12.689 ******** 2026-04-16 09:16:33.396485 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:16:33.396496 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:16:33.396507 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:16:33.396518 | orchestrator | 2026-04-16 09:16:33.396529 | orchestrator | RUNNING HANDLER [nova : Restart nova-metadata container] *********************** 2026-04-16 09:16:33.396540 | orchestrator | Thursday 16 April 2026 09:15:53 +0000 (0:00:12.894) 0:04:25.584 ******** 2026-04-16 09:16:33.396551 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:16:33.396562 | orchestrator 
| changed: [testbed-node-2] 2026-04-16 09:16:33.396573 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:16:33.396623 | orchestrator | 2026-04-16 09:16:33.396645 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-04-16 09:16:33.396663 | orchestrator | 2026-04-16 09:16:33.396682 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:16:33.396702 | orchestrator | Thursday 16 April 2026 09:15:58 +0000 (0:00:05.323) 0:04:30.908 ******** 2026-04-16 09:16:33.396720 | orchestrator | included: /ansible/roles/nova-cell/tasks/reconfigure.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:16:33.396741 | orchestrator | 2026-04-16 09:16:33.396760 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:16:33.396779 | orchestrator | Thursday 16 April 2026 09:16:00 +0000 (0:00:01.643) 0:04:32.551 ******** 2026-04-16 09:16:33.396809 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:16:33.396829 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:16:33.396849 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:16:33.396868 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:16:33.396888 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:16:33.396907 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:16:33.396927 | orchestrator | 2026-04-16 09:16:33.396947 | orchestrator | TASK [nova-cell : Get new Libvirt version] ************************************* 2026-04-16 09:16:33.396967 | orchestrator | Thursday 16 April 2026 09:16:01 +0000 (0:00:01.100) 0:04:33.652 ******** 2026-04-16 09:16:33.396999 | orchestrator | changed: [testbed-node-3] 2026-04-16 09:17:06.416407 | orchestrator | 2026-04-16 09:17:06.416489 | orchestrator | TASK [nova-cell : Cache new Libvirt version] *********************************** 
2026-04-16 09:17:06.416496 | orchestrator | Thursday 16 April 2026 09:16:33 +0000 (0:00:32.032) 0:05:05.685 ******** 2026-04-16 09:17:06.416501 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.416506 | orchestrator | 2026-04-16 09:17:06.416510 | orchestrator | TASK [Get nova_libvirt image info] ********************************************* 2026-04-16 09:17:06.416515 | orchestrator | Thursday 16 April 2026 09:16:34 +0000 (0:00:01.394) 0:05:07.079 ******** 2026-04-16 09:17:06.416519 | orchestrator | included: service-image-info for testbed-node-3 2026-04-16 09:17:06.416523 | orchestrator | 2026-04-16 09:17:06.416527 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] ***************** 2026-04-16 09:17:06.416531 | orchestrator | Thursday 16 April 2026 09:16:35 +0000 (0:00:01.132) 0:05:08.212 ******** 2026-04-16 09:17:06.416535 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.416538 | orchestrator | 2026-04-16 09:17:06.416542 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-16 09:17:06.416546 | orchestrator | Thursday 16 April 2026 09:16:39 +0000 (0:00:03.264) 0:05:11.477 ******** 2026-04-16 09:17:06.416550 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.416554 | orchestrator | 2026-04-16 09:17:06.416558 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] **************** 2026-04-16 09:17:06.416561 | orchestrator | Thursday 16 April 2026 09:16:41 +0000 (0:00:01.972) 0:05:13.450 ******** 2026-04-16 09:17:06.416565 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:06.416623 | orchestrator | 2026-04-16 09:17:06.416628 | orchestrator | TASK [service-image-info : set_fact] ******************************************* 2026-04-16 09:17:06.416632 | orchestrator | Thursday 16 April 2026 09:16:43 +0000 (0:00:01.893) 0:05:15.343 ******** 2026-04-16 09:17:06.416636 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
09:17:06.416640 | orchestrator | 2026-04-16 09:17:06.416644 | orchestrator | TASK [nova-cell : Get container facts] ***************************************** 2026-04-16 09:17:06.416648 | orchestrator | Thursday 16 April 2026 09:16:45 +0000 (0:00:02.138) 0:05:17.482 ******** 2026-04-16 09:17:06.416651 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.416655 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.416659 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.416663 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.416667 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:06.416671 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:06.416675 | orchestrator | 2026-04-16 09:17:06.416679 | orchestrator | TASK [nova-cell : Get current Libvirt version] ********************************* 2026-04-16 09:17:06.416683 | orchestrator | Thursday 16 April 2026 09:16:48 +0000 (0:00:03.554) 0:05:21.036 ******** 2026-04-16 09:17:06.416687 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.416691 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.416695 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.416698 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.416702 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:06.416706 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:06.416710 | orchestrator | 2026-04-16 09:17:06.416726 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************ 2026-04-16 09:17:06.416744 | orchestrator | Thursday 16 April 2026 09:16:52 +0000 (0:00:03.888) 0:05:24.925 ******** 2026-04-16 09:17:06.416748 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.416752 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.416755 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.416759 | orchestrator | ok: [testbed-node-4] => { 2026-04-16 09:17:06.416763 | orchestrator |  
"changed": false, 2026-04-16 09:17:06.416767 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-16 09:17:06.416771 | orchestrator | } 2026-04-16 09:17:06.416776 | orchestrator | ok: [testbed-node-3] => { 2026-04-16 09:17:06.416780 | orchestrator |  "changed": false, 2026-04-16 09:17:06.416784 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-16 09:17:06.416787 | orchestrator | } 2026-04-16 09:17:06.416791 | orchestrator | ok: [testbed-node-5] => { 2026-04-16 09:17:06.416795 | orchestrator |  "changed": false, 2026-04-16 09:17:06.416799 | orchestrator |  "msg": "Libvirt version check successful: target 10.0.0 >= current 10.0.0.\n" 2026-04-16 09:17:06.416803 | orchestrator | } 2026-04-16 09:17:06.416806 | orchestrator | 2026-04-16 09:17:06.416810 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-16 09:17:06.416814 | orchestrator | Thursday 16 April 2026 09:16:57 +0000 (0:00:05.024) 0:05:29.949 ******** 2026-04-16 09:17:06.416818 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.416822 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.416826 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.416830 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:17:06.416834 | orchestrator | 2026-04-16 09:17:06.416838 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-16 09:17:06.416841 | orchestrator | Thursday 16 April 2026 09:16:58 +0000 (0:00:01.245) 0:05:31.194 ******** 2026-04-16 09:17:06.416845 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-16 09:17:06.416850 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-16 09:17:06.416853 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-16 09:17:06.416857 | orchestrator 
| 2026-04-16 09:17:06.416861 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-16 09:17:06.416865 | orchestrator | Thursday 16 April 2026 09:16:59 +0000 (0:00:00.659) 0:05:31.853 ******** 2026-04-16 09:17:06.416869 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-16 09:17:06.416873 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-16 09:17:06.416876 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-16 09:17:06.416880 | orchestrator | 2026-04-16 09:17:06.416884 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-16 09:17:06.416888 | orchestrator | Thursday 16 April 2026 09:17:00 +0000 (0:00:01.207) 0:05:33.061 ******** 2026-04-16 09:17:06.416892 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-16 09:17:06.416896 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:06.416910 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-16 09:17:06.416915 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:06.416919 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-16 09:17:06.416922 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:06.416926 | orchestrator | 2026-04-16 09:17:06.416930 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-16 09:17:06.416934 | orchestrator | Thursday 16 April 2026 09:17:01 +0000 (0:00:00.456) 0:05:33.518 ******** 2026-04-16 09:17:06.416938 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:17:06.416942 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:17:06.416946 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:17:06.416950 | orchestrator | ok: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:17:06.416959 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:17:06.416963 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.416967 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:17:06.416971 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:17:06.416975 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.416979 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:17:06.416984 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:17:06.416988 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.416993 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:17:06.416997 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:17:06.417002 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:17:06.417006 | orchestrator | 2026-04-16 09:17:06.417010 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-16 09:17:06.417015 | orchestrator | Thursday 16 April 2026 09:17:02 +0000 (0:00:01.029) 0:05:34.548 ******** 2026-04-16 09:17:06.417019 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.417023 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.417028 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.417033 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.417037 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:06.417041 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:06.417046 | orchestrator | 2026-04-16 09:17:06.417050 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] 
*************************************** 2026-04-16 09:17:06.417055 | orchestrator | Thursday 16 April 2026 09:17:03 +0000 (0:00:01.263) 0:05:35.811 ******** 2026-04-16 09:17:06.417059 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:06.417064 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:06.417071 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:06.417076 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:06.417080 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:06.417085 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:06.417089 | orchestrator | 2026-04-16 09:17:06.417093 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-16 09:17:06.417098 | orchestrator | Thursday 16 April 2026 09:17:05 +0000 (0:00:01.504) 0:05:37.315 ******** 2026-04-16 09:17:06.417105 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:06.417113 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:06.417128 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491527 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491707 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491727 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491739 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491783 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491857 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491873 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491893 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491905 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491925 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491938 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:08.491950 | orchestrator | 2026-04-16 09:17:08.491963 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:17:08.491976 | orchestrator | Thursday 16 April 2026 09:17:07 +0000 (0:00:02.206) 0:05:39.522 ******** 2026-04-16 09:17:08.491995 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:17:11.875918 | orchestrator | 2026-04-16 09:17:11.876031 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 09:17:11.876048 | orchestrator | Thursday 16 April 2026 09:17:08 +0000 (0:00:01.326) 0:05:40.848 ******** 2026-04-16 09:17:11.876065 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876098 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876112 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876145 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876177 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876190 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876208 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876222 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876233 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876266 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:11.876286 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:13.574864 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:13.574991 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:13.575033 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:13.575048 | orchestrator | 2026-04-16 09:17:13.575063 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 09:17:13.575076 | orchestrator | Thursday 16 April 2026 09:17:12 +0000 (0:00:03.556) 0:05:44.405 ******** 2026-04-16 09:17:13.575091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:13.575124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:13.575143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:13.575156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:13.575178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:13.575190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:13.575203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:13.575215 | orchestrator | skipping: [testbed-node-4] 
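[editor's note] The loop items repeated throughout this log all share one service-definition shape: a `container_name`, `group`, `image`, a `volumes` list, and a `healthcheck` dict with `interval`/`retries`/`start_period`/`test`/`timeout`. The bare `''` entries in the volume lists are conditional mounts whose template expression rendered empty (an assumption based on how the lists look here; kolla-ansible appears to drop them before starting the container). A minimal sketch of inspecting one such item, using a definition copied from the log above — the `effective_volumes` helper is hypothetical, not part of kolla-ansible:

```python
# Sketch only: structure inferred from the loop items in this log,
# not taken from the actual kolla-ansible source.

service = {
    "key": "nova-ssh",
    "value": {
        "container_name": "nova_ssh",
        "group": "compute",
        "image": "registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328",
        "enabled": True,
        # Empty strings are conditional volumes that rendered empty in the
        # template (assumption); they carry no mount and can be dropped.
        "volumes": [
            "/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla",
            "nova_compute:/var/lib/nova",
            "", "",
        ],
        "dimensions": {},
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_listen sshd 8022"],
            "timeout": "30",
        },
    },
}

def effective_volumes(defn):
    """Return the volume list with empty placeholder entries removed."""
    return [v for v in defn["value"]["volumes"] if v]

print(effective_volumes(service))
# → ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro',
#    'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova']
```

The same filtering idea explains why identical-looking items can carry a different number of `''` entries per service: each optional mount contributes one empty slot when its condition is false.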
2026-04-16 09:17:13.575238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461435 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:15.461547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461631 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:15.461653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:15.461669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:15.461685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461700 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:15.461715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461737 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:15.461775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:15.461799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461826 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:15.461840 | orchestrator | 2026-04-16 09:17:15.461855 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 09:17:15.461871 | orchestrator | Thursday 16 April 2026 09:17:14 +0000 (0:00:02.177) 0:05:46.582 ******** 2026-04-16 09:17:15.461886 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:15.461903 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:15.461919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:15.461934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:15.461950 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:15.461982 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:20.330372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:20.330482 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:20.330500 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:20.330514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:20.330527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:20.330538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:17:20.330600 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:20.330649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:20.330662 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:20.330672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:20.330682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:20.330692 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:20.330701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:17:20.330712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:17:20.330722 | orchestrator | skipping: [testbed-node-2] 2026-04-16 
09:17:20.330731 | orchestrator | 2026-04-16 09:17:20.330742 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:17:20.330754 | orchestrator | Thursday 16 April 2026 09:17:16 +0000 (0:00:02.486) 0:05:49.069 ******** 2026-04-16 09:17:20.330771 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:20.330781 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:20.330791 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:20.330801 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:17:20.330811 | orchestrator | 2026-04-16 09:17:20.330822 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-16 09:17:20.330831 | orchestrator | Thursday 16 April 2026 09:17:18 +0000 (0:00:01.442) 0:05:50.511 ******** 2026-04-16 09:17:20.330841 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:17:20.330850 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:17:20.330859 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:17:20.330869 | orchestrator | 2026-04-16 09:17:20.330878 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-16 09:17:20.330888 | orchestrator | Thursday 16 April 2026 09:17:19 +0000 (0:00:01.070) 0:05:51.582 ******** 2026-04-16 09:17:20.330898 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:17:20.330909 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:17:20.330924 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:17:20.330935 | orchestrator | 2026-04-16 09:17:20.330945 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-16 09:17:20.330966 | orchestrator | Thursday 16 April 2026 09:17:20 +0000 (0:00:00.957) 0:05:52.539 ******** 2026-04-16 09:17:44.819176 | 
orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:44.819345 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:44.819364 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:44.819376 | orchestrator | 2026-04-16 09:17:44.819388 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-16 09:17:44.819401 | orchestrator | Thursday 16 April 2026 09:17:20 +0000 (0:00:00.513) 0:05:53.053 ******** 2026-04-16 09:17:44.819412 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:44.819424 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:44.819435 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:44.819446 | orchestrator | 2026-04-16 09:17:44.819457 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-16 09:17:44.819468 | orchestrator | Thursday 16 April 2026 09:17:21 +0000 (0:00:00.733) 0:05:53.786 ******** 2026-04-16 09:17:44.819479 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-16 09:17:44.819491 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:17:44.819502 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:17:44.819513 | orchestrator | 2026-04-16 09:17:44.819524 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-16 09:17:44.819535 | orchestrator | Thursday 16 April 2026 09:17:22 +0000 (0:00:01.151) 0:05:54.938 ******** 2026-04-16 09:17:44.819546 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-16 09:17:44.819606 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:17:44.819619 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:17:44.819632 | orchestrator | 2026-04-16 09:17:44.819644 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-16 09:17:44.819657 | orchestrator | Thursday 16 April 2026 09:17:23 +0000 
(0:00:01.206) 0:05:56.144 ******** 2026-04-16 09:17:44.819670 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-16 09:17:44.819683 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:17:44.819695 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:17:44.819707 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-16 09:17:44.819719 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-16 09:17:44.819731 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-16 09:17:44.819743 | orchestrator | 2026-04-16 09:17:44.819756 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-16 09:17:44.819797 | orchestrator | Thursday 16 April 2026 09:17:27 +0000 (0:00:03.946) 0:06:00.090 ******** 2026-04-16 09:17:44.819810 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.819824 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:44.819836 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:44.819849 | orchestrator | 2026-04-16 09:17:44.819861 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-16 09:17:44.819874 | orchestrator | Thursday 16 April 2026 09:17:28 +0000 (0:00:00.514) 0:06:00.605 ******** 2026-04-16 09:17:44.819886 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.819900 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:44.819919 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:44.819939 | orchestrator | 2026-04-16 09:17:44.820031 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-16 09:17:44.820055 | orchestrator | Thursday 16 April 2026 09:17:28 +0000 (0:00:00.308) 0:06:00.914 ******** 2026-04-16 09:17:44.820068 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:44.820079 | orchestrator | ok: [testbed-node-4] 2026-04-16 
09:17:44.820090 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:44.820100 | orchestrator | 2026-04-16 09:17:44.820111 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-16 09:17:44.820122 | orchestrator | Thursday 16 April 2026 09:17:30 +0000 (0:00:01.403) 0:06:02.318 ******** 2026-04-16 09:17:44.820135 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:17:44.820148 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:17:44.820159 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:17:44.820171 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:17:44.820183 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:17:44.820194 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:17:44.820204 | orchestrator | 2026-04-16 09:17:44.820215 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-16 09:17:44.820226 | orchestrator | Thursday 
16 April 2026 09:17:33 +0000 (0:00:03.678) 0:06:05.996 ******** 2026-04-16 09:17:44.820251 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 09:17:44.820263 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 09:17:44.820274 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-16 09:17:44.820284 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 09:17:44.820317 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:17:44.820329 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 09:17:44.820339 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:17:44.820350 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-16 09:17:44.820361 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:17:44.820371 | orchestrator | 2026-04-16 09:17:44.820382 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-16 09:17:44.820393 | orchestrator | Thursday 16 April 2026 09:17:36 +0000 (0:00:03.139) 0:06:09.135 ******** 2026-04-16 09:17:44.820404 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:44.820414 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:44.820436 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:44.820448 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:17:44.820459 | orchestrator | 2026-04-16 09:17:44.820470 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-16 09:17:44.820481 | orchestrator | Thursday 16 April 2026 09:17:39 +0000 (0:00:02.490) 0:06:11.626 ******** 2026-04-16 09:17:44.820491 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:17:44.820503 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:17:44.820513 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:17:44.820524 | orchestrator | 2026-04-16 09:17:44.820535 | orchestrator | 
TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-16 09:17:44.820546 | orchestrator | Thursday 16 April 2026 09:17:40 +0000 (0:00:00.948) 0:06:12.575 ******** 2026-04-16 09:17:44.820580 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.820591 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:44.820602 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:44.820613 | orchestrator | 2026-04-16 09:17:44.820624 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-16 09:17:44.820634 | orchestrator | Thursday 16 April 2026 09:17:40 +0000 (0:00:00.308) 0:06:12.883 ******** 2026-04-16 09:17:44.820645 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.820656 | orchestrator | 2026-04-16 09:17:44.820667 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-16 09:17:44.820677 | orchestrator | Thursday 16 April 2026 09:17:40 +0000 (0:00:00.121) 0:06:13.004 ******** 2026-04-16 09:17:44.820688 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.820699 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:44.820709 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:44.820720 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:44.820731 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:44.820741 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:44.820752 | orchestrator | 2026-04-16 09:17:44.820762 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-16 09:17:44.820773 | orchestrator | Thursday 16 April 2026 09:17:41 +0000 (0:00:00.785) 0:06:13.790 ******** 2026-04-16 09:17:44.820784 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:17:44.820795 | orchestrator | 2026-04-16 09:17:44.820805 | orchestrator | TASK [nova-cell : Set vendordata file path] 
************************************ 2026-04-16 09:17:44.820816 | orchestrator | Thursday 16 April 2026 09:17:42 +0000 (0:00:00.733) 0:06:14.523 ******** 2026-04-16 09:17:44.820827 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:17:44.820837 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:17:44.820850 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:17:44.820869 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:17:44.820885 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:17:44.820901 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:17:44.820918 | orchestrator | 2026-04-16 09:17:44.820934 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-16 09:17:44.820951 | orchestrator | Thursday 16 April 2026 09:17:43 +0000 (0:00:00.762) 0:06:15.285 ******** 2026-04-16 09:17:44.820973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:44.821031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:46.718793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:17:46.718908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:46.718929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:46.718942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:46.718981 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719011 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719112 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719130 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:46.719146 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:51.509104 | orchestrator | 2026-04-16 09:17:51.509226 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-04-16 09:17:51.509311 | orchestrator | Thursday 16 April 2026 09:17:46 +0000 (0:00:03.740) 0:06:19.026 ******** 2026-04-16 09:17:51.509358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:51.509379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:51.509395 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:51.509439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:51.509454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:17:51.509494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:17:51.509509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:51.509525 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:51.509551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:17:51.509603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:17:51.509629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:18:06.405782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:18:06.405969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:06.405999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:06.406125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:06.406144 | orchestrator | 2026-04-16 09:18:06.406156 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-16 09:18:06.406169 | orchestrator | Thursday 16 April 2026 09:17:53 +0000 (0:00:06.853) 0:06:25.879 ******** 2026-04-16 09:18:06.406179 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:06.406191 | orchestrator | skipping: [testbed-node-4] 
2026-04-16 09:18:06.406203 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:06.406215 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:06.406226 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:06.406238 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:06.406249 | orchestrator | 2026-04-16 09:18:06.406261 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-16 09:18:06.406273 | orchestrator | Thursday 16 April 2026 09:17:55 +0000 (0:00:01.453) 0:06:27.332 ******** 2026-04-16 09:18:06.406285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:18:06.406297 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:18:06.406309 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:18:06.406321 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:06.406333 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:18:06.406344 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:18:06.406356 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:18:06.406368 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:06.406380 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:18:06.406391 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:18:06.406403 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:06.406414 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:18:06.406449 | orchestrator | ok: [testbed-node-3] => 
(item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:18:06.406461 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:18:06.406471 | orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:18:06.406480 | orchestrator | 2026-04-16 09:18:06.406490 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-16 09:18:06.406500 | orchestrator | Thursday 16 April 2026 09:17:58 +0000 (0:00:03.415) 0:06:30.748 ******** 2026-04-16 09:18:06.406531 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:06.406582 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:06.406599 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:06.406615 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:06.406631 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:06.406645 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:06.406661 | orchestrator | 2026-04-16 09:18:06.406676 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-16 09:18:06.406692 | orchestrator | Thursday 16 April 2026 09:17:59 +0000 (0:00:00.545) 0:06:31.293 ******** 2026-04-16 09:18:06.406709 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:18:06.406725 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:18:06.406742 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:18:06.406759 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406776 | orchestrator | ok: [testbed-node-3] => 
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 09:18:06.406793 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 09:18:06.406810 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406825 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406843 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 09:18:06.406859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406875 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:06.406897 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406919 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:06.406934 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:18:06.406950 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:06.406966 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.406982 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.407000 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.407015 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.407030 | orchestrator | ok: 
[testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.407046 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:18:06.407061 | orchestrator | 2026-04-16 09:18:06.407076 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-16 09:18:06.407092 | orchestrator | Thursday 16 April 2026 09:18:03 +0000 (0:00:04.878) 0:06:36.171 ******** 2026-04-16 09:18:06.407105 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:18:06.407120 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:18:06.407135 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:18:06.407168 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:18:06.407186 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:18:06.407202 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:18:06.407218 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:18:06.407233 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:18:06.407251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:18:06.407268 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:18:06.407303 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:18:16.257860 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:18:16.257963 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:16.257975 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:18:16.257983 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:18:16.257991 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:16.257998 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:18:16.258006 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:18:16.258013 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:16.258099 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:18:16.258107 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:18:16.258115 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:18:16.258123 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:18:16.258130 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:18:16.258138 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:18:16.258145 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:18:16.258153 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:18:16.258166 | orchestrator | 2026-04-16 09:18:16.258174 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-16 09:18:16.258182 | orchestrator | Thursday 16 April 2026 09:18:10 +0000 (0:00:06.247) 0:06:42.419 ******** 2026-04-16 
09:18:16.258189 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:16.258196 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:16.258204 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:16.258211 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:16.258218 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:16.258225 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:16.258233 | orchestrator | 2026-04-16 09:18:16.258240 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-16 09:18:16.258247 | orchestrator | Thursday 16 April 2026 09:18:10 +0000 (0:00:00.659) 0:06:43.078 ******** 2026-04-16 09:18:16.258256 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:16.258263 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:16.258270 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:16.258278 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:16.258285 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:16.258292 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:16.258299 | orchestrator | 2026-04-16 09:18:16.258307 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-16 09:18:16.258336 | orchestrator | Thursday 16 April 2026 09:18:11 +0000 (0:00:00.546) 0:06:43.625 ******** 2026-04-16 09:18:16.258344 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:16.258351 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:16.258358 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:16.258365 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:18:16.258374 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:18:16.258381 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:18:16.258389 | orchestrator | 2026-04-16 09:18:16.258398 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 
2026-04-16 09:18:16.258406 | orchestrator | Thursday 16 April 2026 09:18:13 +0000 (0:00:01.814) 0:06:45.440 ******** 2026-04-16 09:18:16.258414 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:16.258423 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:16.258432 | orchestrator | changed: [testbed-node-3] 2026-04-16 09:18:16.258440 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:16.258449 | orchestrator | changed: [testbed-node-4] 2026-04-16 09:18:16.258457 | orchestrator | changed: [testbed-node-5] 2026-04-16 09:18:16.258465 | orchestrator | 2026-04-16 09:18:16.258474 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-16 09:18:16.258483 | orchestrator | Thursday 16 April 2026 09:18:15 +0000 (0:00:02.196) 0:06:47.636 ******** 2026-04-16 09:18:16.258494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:16.258578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:16.258590 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:16.258600 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:16.258616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:16.258625 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:16.258634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:16.258643 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:16.258658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:19.327081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:19.327158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:19.327184 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:19.327191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:18:19.327196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:18:19.327201 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:19.327205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:18:19.327209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:18:19.327213 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:19.327227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:18:19.327231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:18:19.327239 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:19.327243 | orchestrator | 2026-04-16 09:18:19.327247 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-16 09:18:19.327253 | orchestrator | Thursday 16 April 2026 09:18:16 +0000 (0:00:01.421) 0:06:49.058 ******** 2026-04-16 09:18:19.327256 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-16 09:18:19.327261 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327265 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:19.327269 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-16 09:18:19.327272 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327276 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:19.327280 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-16 09:18:19.327284 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327288 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:19.327291 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-16 09:18:19.327295 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327299 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:18:19.327303 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-16 09:18:19.327307 
| orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327310 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:18:19.327314 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-16 09:18:19.327318 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-04-16 09:18:19.327322 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:18:19.327325 | orchestrator | 2026-04-16 09:18:19.327329 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-16 09:18:19.327333 | orchestrator | Thursday 16 April 2026 09:18:17 +0000 (0:00:00.846) 0:06:49.904 ******** 2026-04-16 09:18:19.327338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:18:19.327371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922378 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922384 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:20.922416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:18:22.957830 | orchestrator | 2026-04-16 
09:18:22.957944 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] *** 2026-04-16 09:18:22.957978 | orchestrator | Thursday 16 April 2026 09:18:21 +0000 (0:00:03.346) 0:06:53.251 ******** 2026-04-16 09:18:22.958083 | orchestrator | changed: [testbed-node-3] => { 2026-04-16 09:18:22.958109 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958130 | orchestrator | } 2026-04-16 09:18:22.958149 | orchestrator | changed: [testbed-node-4] => { 2026-04-16 09:18:22.958170 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958190 | orchestrator | } 2026-04-16 09:18:22.958210 | orchestrator | changed: [testbed-node-5] => { 2026-04-16 09:18:22.958225 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958236 | orchestrator | } 2026-04-16 09:18:22.958247 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:18:22.958258 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958268 | orchestrator | } 2026-04-16 09:18:22.958279 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:18:22.958290 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958303 | orchestrator | } 2026-04-16 09:18:22.958322 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:18:22.958350 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:18:22.958370 | orchestrator | } 2026-04-16 09:18:22.958387 | orchestrator | 2026-04-16 09:18:22.958406 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:18:22.958424 | orchestrator | Thursday 16 April 2026 09:18:21 +0000 (0:00:00.846) 0:06:54.097 ******** 2026-04-16 09:18:22.958446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:22.958473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:22.958527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:22.958543 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:18:22.958605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:22.958619 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:22.958630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:22.958642 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:18:22.958653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:18:22.958673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:18:22.958685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:18:22.958696 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:18:22.958716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:21:13.767382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:21:13.767576 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:21:13.767599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:21:13.767640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:21:13.767654 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:21:13.767666 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:21:13.767678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:21:13.767689 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:21:13.767701 | orchestrator | 2026-04-16 09:21:13.767713 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:21:13.767726 | orchestrator | Thursday 16 April 2026 09:18:24 +0000 (0:00:02.201) 0:06:56.298 ******** 2026-04-16 09:21:13.767737 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:21:13.767748 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:21:13.767759 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:21:13.767770 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:21:13.767781 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:21:13.767792 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 09:21:13.767803 | orchestrator | 2026-04-16 09:21:13.767814 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.767825 | orchestrator | Thursday 16 April 2026 09:18:24 +0000 (0:00:00.653) 0:06:56.952 ******** 2026-04-16 09:21:13.767836 | orchestrator | 2026-04-16 09:21:13.767847 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.767857 | orchestrator | Thursday 16 April 2026 09:18:25 +0000 (0:00:00.316) 0:06:57.269 ******** 2026-04-16 09:21:13.767868 | orchestrator | 2026-04-16 09:21:13.767879 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.767909 | orchestrator | Thursday 16 April 2026 09:18:25 +0000 (0:00:00.166) 0:06:57.435 ******** 2026-04-16 09:21:13.767922 | orchestrator | 2026-04-16 09:21:13.767935 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.767947 | orchestrator | Thursday 16 April 2026 09:18:25 +0000 (0:00:00.165) 0:06:57.601 ******** 2026-04-16 09:21:13.767960 | orchestrator | 2026-04-16 09:21:13.767972 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.767994 | orchestrator | Thursday 16 April 2026 09:18:25 +0000 (0:00:00.158) 0:06:57.760 ******** 2026-04-16 09:21:13.768007 | orchestrator | 2026-04-16 09:21:13.768019 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-04-16 09:21:13.768031 | orchestrator | Thursday 16 April 2026 09:18:25 +0000 (0:00:00.144) 0:06:57.905 ******** 2026-04-16 09:21:13.768043 | orchestrator | 2026-04-16 09:21:13.768056 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-04-16 09:21:13.768071 | orchestrator | Thursday 16 April 2026 
09:18:25 +0000 (0:00:00.301) 0:06:58.206 ********
2026-04-16 09:21:13.768090 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:21:13.768108 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:21:13.768126 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:21:13.768144 | orchestrator |
2026-04-16 09:21:13.768162 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-04-16 09:21:13.768180 | orchestrator | Thursday 16 April 2026 09:18:39 +0000 (0:00:13.528) 0:07:11.734 ********
2026-04-16 09:21:13.768199 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:21:13.768218 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:21:13.768237 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:21:13.768256 | orchestrator |
2026-04-16 09:21:13.768274 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-04-16 09:21:13.768291 | orchestrator | Thursday 16 April 2026 09:19:00 +0000 (0:00:21.125) 0:07:32.860 ********
2026-04-16 09:21:13.768310 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:21:13.768328 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:21:13.768346 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:21:13.768366 | orchestrator |
2026-04-16 09:21:13.768384 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-04-16 09:21:13.768402 | orchestrator | Thursday 16 April 2026 09:19:27 +0000 (0:00:26.663) 0:07:59.523 ********
2026-04-16 09:21:13.768419 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:21:13.768438 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:21:13.768456 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:21:13.768467 | orchestrator |
2026-04-16 09:21:13.768478 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-04-16 09:21:13.768516 | orchestrator | Thursday 16 April 2026 09:20:09 +0000 (0:00:42.352) 0:08:41.876 ********
2026-04-16 09:21:13.768528 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:21:13.768539 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left).
2026-04-16 09:21:13.768552 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left).
2026-04-16 09:21:13.768563 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:21:13.768574 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:21:13.768584 | orchestrator |
2026-04-16 09:21:13.768595 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-04-16 09:21:13.768606 | orchestrator | Thursday 16 April 2026 09:20:16 +0000 (0:00:06.342) 0:08:48.218 ********
2026-04-16 09:21:13.768617 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:21:13.768628 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:21:13.768639 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:21:13.768649 | orchestrator |
2026-04-16 09:21:13.768660 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-04-16 09:21:13.768671 | orchestrator | Thursday 16 April 2026 09:20:16 +0000 (0:00:00.798) 0:08:49.017 ********
2026-04-16 09:21:13.768682 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:21:13.768693 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:21:13.768704 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:21:13.768714 | orchestrator |
2026-04-16 09:21:13.768725 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-04-16 09:21:13.768737 | orchestrator | Thursday 16 April 2026 09:20:52 +0000 (0:00:35.656) 0:09:24.673 ********
2026-04-16 09:21:13.768747 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:21:13.768769 | orchestrator |
2026-04-16 09:21:13.768780 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-04-16 09:21:13.768791 | orchestrator | Thursday 16 April 2026 09:20:53 +0000 (0:00:00.690) 0:09:25.363 ********
2026-04-16 09:21:13.768802 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:13.768813 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:21:13.768823 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:21:13.768834 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:13.768845 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:13.768856 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 09:21:13.768867 | orchestrator |
2026-04-16 09:21:13.768878 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-04-16 09:21:13.768889 | orchestrator | Thursday 16 April 2026 09:21:01 +0000 (0:00:08.349) 0:09:33.713 ********
2026-04-16 09:21:13.768899 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:21:13.768910 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:21:13.768921 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:13.768932 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:13.768943 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:21:13.768954 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:13.768965 | orchestrator |
2026-04-16 09:21:13.768976 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-04-16 09:21:13.768986 | orchestrator | Thursday 16 April 2026 09:21:10 +0000 (0:00:08.840) 0:09:42.553 ********
2026-04-16 09:21:13.768997 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:21:13.769008 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:21:13.769019 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:13.769030 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:13.769041 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:13.769061 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5
2026-04-16 09:21:51.172402 | orchestrator |
2026-04-16 09:21:51.172662 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-04-16 09:21:51.172690 | orchestrator | Thursday 16 April 2026 09:21:13 +0000 (0:00:03.484) 0:09:46.037 ********
2026-04-16 09:21:51.172710 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 09:21:51.172727 | orchestrator |
2026-04-16 09:21:51.172745 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-04-16 09:21:51.172763 | orchestrator | Thursday 16 April 2026 09:21:26 +0000 (0:00:12.650) 0:09:58.688 ********
2026-04-16 09:21:51.172780 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 09:21:51.172796 | orchestrator |
2026-04-16 09:21:51.172812 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-04-16 09:21:51.172830 | orchestrator | Thursday 16 April 2026 09:21:28 +0000 (0:00:01.920) 0:10:00.608 ********
2026-04-16 09:21:51.172847 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:21:51.172865 | orchestrator |
2026-04-16 09:21:51.172882 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-04-16 09:21:51.172899 | orchestrator | Thursday 16 April 2026 09:21:30 +0000 (0:00:01.674) 0:10:02.283 ********
2026-04-16 09:21:51.172916 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-04-16 09:21:51.172935 | orchestrator |
2026-04-16 09:21:51.172953 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-16 09:21:51.172973 | orchestrator |
2026-04-16 09:21:51.172991 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-16 09:21:51.173010 | orchestrator | Thursday 16 April 2026 09:21:42 +0000 (0:00:12.759) 0:10:15.042 ********
2026-04-16 09:21:51.173029 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:21:51.173047 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:21:51.173066 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:21:51.173082 | orchestrator |
2026-04-16 09:21:51.173099 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-16 09:21:51.173148 | orchestrator |
2026-04-16 09:21:51.173166 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-16 09:21:51.173183 | orchestrator | Thursday 16 April 2026 09:21:44 +0000 (0:00:01.523) 0:10:16.565 ********
2026-04-16 09:21:51.173199 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:51.173216 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:51.173231 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:51.173246 | orchestrator |
2026-04-16 09:21:51.173262 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-16 09:21:51.173276 | orchestrator |
2026-04-16 09:21:51.173292 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-16 09:21:51.173309 | orchestrator | Thursday 16 April 2026 09:21:45 +0000 (0:00:00.949) 0:10:17.515 ********
2026-04-16 09:21:51.173327 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-16 09:21:51.173345 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-04-16 09:21:51.173362 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-16 09:21:51.173379 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-16 09:21:51.173396 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-16 09:21:51.173413 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.173430 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:21:51.173447 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-16 09:21:51.173464 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-04-16 09:21:51.173508 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-16 09:21:51.173525 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-16 09:21:51.173542 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-16 09:21:51.173558 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.173574 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:21:51.173591 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-16 09:21:51.173608 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-04-16 09:21:51.173625 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-16 09:21:51.173642 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-16 09:21:51.173659 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-16 09:21:51.173675 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.173691 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:21:51.173708 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-04-16 09:21:51.173725 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-16 09:21:51.173741 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-16 09:21:51.173758 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-04-16 09:21:51.173774 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-16 09:21:51.173791 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.173807 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:51.173824 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-04-16 09:21:51.173841 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-16 09:21:51.173857 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-16 09:21:51.173874 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-04-16 09:21:51.173907 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-16 09:21:51.173925 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.173941 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:51.173958 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-04-16 09:21:51.174011 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-16 09:21:51.174105 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-16 09:21:51.174123 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-04-16 09:21:51.174141 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-16 09:21:51.174159 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-16 09:21:51.174178 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:51.174196 | orchestrator |
2026-04-16 09:21:51.174214 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-16 09:21:51.174232 | orchestrator |
2026-04-16 09:21:51.174250 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-16 09:21:51.174268 | orchestrator | Thursday 16 April 2026 09:21:47 +0000 (0:00:01.727) 0:10:19.243 ********
2026-04-16 09:21:51.174286 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-04-16 09:21:51.174304 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-04-16 09:21:51.174322 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:51.174341 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-04-16 09:21:51.174359 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-04-16 09:21:51.174376 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:51.174395 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-04-16 09:21:51.174412 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-16 09:21:51.174430 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:51.174448 | orchestrator |
2026-04-16 09:21:51.174466 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-16 09:21:51.174505 | orchestrator |
2026-04-16 09:21:51.174522 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-16 09:21:51.174539 | orchestrator | Thursday 16 April 2026 09:21:48 +0000 (0:00:01.152) 0:10:20.395 ********
2026-04-16 09:21:51.174555 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:51.174572 | orchestrator |
2026-04-16 09:21:51.174589 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-16 09:21:51.174605 | orchestrator |
2026-04-16 09:21:51.174622 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-16 09:21:51.174639 | orchestrator | Thursday 16 April 2026 09:21:49 +0000 (0:00:01.259) 0:10:21.654 ********
2026-04-16 09:21:51.174656 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:21:51.174673 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:21:51.174690 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:21:51.174706 | orchestrator |
2026-04-16 09:21:51.174722 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:21:51.174739 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 09:21:51.174758 | orchestrator | testbed-node-0 : ok=58  changed=25  unreachable=0 failed=0 skipped=53  rescued=0 ignored=0
2026-04-16 09:21:51.174776 | orchestrator | testbed-node-1 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-16 09:21:51.174792 | orchestrator | testbed-node-2 : ok=31  changed=21  unreachable=0 failed=0 skipped=61  rescued=0 ignored=0
2026-04-16 09:21:51.174809 | orchestrator | testbed-node-3 : ok=49  changed=15  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-04-16 09:21:51.174825 | orchestrator | testbed-node-4 : ok=43  changed=14  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-16 09:21:51.174842 | orchestrator | testbed-node-5 : ok=48  changed=14  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-04-16 09:21:51.174870 | orchestrator |
2026-04-16 09:21:51.174887 | orchestrator |
2026-04-16 09:21:51.174904 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:21:51.174921 | orchestrator | Thursday 16 April 2026 09:21:51 +0000 (0:00:01.709) 0:10:23.364 ********
2026-04-16 09:21:51.174937 | orchestrator | ===============================================================================
2026-04-16 09:21:51.174953 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.35s
2026-04-16 09:21:51.174970 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 35.66s
2026-04-16 09:21:51.174986 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.15s
2026-04-16 09:21:51.175003 | orchestrator | nova-cell : Get new Libvirt version ------------------------------------ 32.03s
2026-04-16 09:21:51.175019 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 27.42s
2026-04-16 09:21:51.175036 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.66s
2026-04-16 09:21:51.175052 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 21.13s
2026-04-16 09:21:51.175068 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.69s
2026-04-16 09:21:51.175085 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.74s
2026-04-16 09:21:51.175101 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.53s
2026-04-16 09:21:51.175118 | orchestrator | nova-cell : Update cell ------------------------------------------------ 13.23s
2026-04-16 09:21:51.175144 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.89s
2026-04-16 09:21:51.457556 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.76s
2026-04-16 09:21:51.457641 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.65s
2026-04-16 09:21:51.457651 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------ 12.42s
2026-04-16 09:21:51.457658 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.30s
2026-04-16 09:21:51.457666 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 11.96s
2026-04-16 09:21:51.457673 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.84s
2026-04-16 09:21:51.457681 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.79s
2026-04-16 09:21:51.457688 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves ---- 8.35s
2026-04-16 09:21:51.623151 | orchestrator | + osism apply nova-update-cell-mappings
2026-04-16 09:22:02.926398 | orchestrator | 2026-04-16 09:22:02 | INFO  | Prepare task for execution of nova-update-cell-mappings.
2026-04-16 09:22:03.004063 | orchestrator | 2026-04-16 09:22:03 | INFO  | Task 9730ec10-e12a-4b59-ba63-bf8d242376b4 (nova-update-cell-mappings) was prepared for execution.
2026-04-16 09:22:03.004177 | orchestrator | 2026-04-16 09:22:03 | INFO  | It takes a moment until task 9730ec10-e12a-4b59-ba63-bf8d242376b4 (nova-update-cell-mappings) has been started and output is visible here.
2026-04-16 09:22:27.525042 | orchestrator |
2026-04-16 09:22:27.525188 | orchestrator | PLAY [Update Nova cell mappings] ***********************************************
2026-04-16 09:22:27.525214 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:22:27.525236 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:22:27.525273 | orchestrator |
2026-04-16 09:22:27.525291 | orchestrator | TASK [Get list of Nova cells] **************************************************
2026-04-16 09:22:27.525309 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:22:27.525363 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:22:27.525403 | orchestrator | Thursday 16 April 2026 09:22:07 +0000 (0:00:01.121) 0:00:01.121 ********
2026-04-16 09:22:27.525422 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:22:27.525444 | orchestrator |
2026-04-16 09:22:27.525462 | orchestrator | TASK [Parse cell information] **************************************************
2026-04-16 09:22:27.525531 | orchestrator | Thursday 16 April 2026 09:22:21 +0000 (0:00:13.909) 0:00:15.031 ********
2026-04-16 09:22:27.525550 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:22:27.525569 | orchestrator |
2026-04-16 09:22:27.525589 | orchestrator | TASK [Display cells to update] *************************************************
2026-04-16 09:22:27.525609 | orchestrator | Thursday 16 April 2026 09:22:21 +0000 (0:00:00.163) 0:00:15.194 ********
2026-04-16 09:22:27.525629 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 09:22:27.525649 | orchestrator |     "msg": "Cells to update: [{'name': '', 'uuid': 'f447964d-49c8-4c84-a475-bd8c7c9cfb34'}]"
2026-04-16 09:22:27.525670 | orchestrator | }
2026-04-16 09:22:27.525690 | orchestrator |
2026-04-16 09:22:27.525709 | orchestrator | TASK [Use specified cell UUID if provided] *************************************
2026-04-16 09:22:27.525729 | orchestrator | Thursday 16 April 2026 09:22:21 +0000 (0:00:00.125) 0:00:15.323 ********
2026-04-16 09:22:27.525748 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:22:27.525768 | orchestrator |
2026-04-16 09:22:27.525788 | orchestrator | TASK [Abort if multiple cells found without specific UUID and abort_on_multiple is enabled] ***
2026-04-16 09:22:27.525807 | orchestrator | Thursday 16 April 2026 09:22:21 +0000 (0:00:00.115) 0:00:15.449 ********
2026-04-16 09:22:27.525827 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:22:27.525844 | orchestrator |
2026-04-16 09:22:27.525861 | orchestrator | TASK [Update Nova cell mappings] ***********************************************
2026-04-16 09:22:27.525879 | orchestrator | Thursday 16 April 2026 09:22:21 +0000 (0:00:00.115) 0:00:15.564 ********
2026-04-16 09:22:27.525896 | orchestrator | changed: [testbed-node-0] => (item=f447964d-49c8-4c84-a475-bd8c7c9cfb34)
2026-04-16 09:22:27.525913 | orchestrator |
2026-04-16 09:22:27.525932 | orchestrator | TASK [Display update results] **************************************************
2026-04-16 09:22:27.525949 | orchestrator | Thursday 16 April 2026 09:22:26 +0000 (0:00:04.521) 0:00:20.085 ********
2026-04-16 09:22:27.525967 | orchestrator | ok: [testbed-node-0] => (item=f447964d-49c8-4c84-a475-bd8c7c9cfb34) => {
2026-04-16 09:22:27.525986 | orchestrator |     "msg": "Cell f447964d-49c8-4c84-a475-bd8c7c9cfb34 updated successfully"
2026-04-16 09:22:27.526003 | orchestrator | }
2026-04-16 09:22:27.526114 | orchestrator |
2026-04-16 09:22:27.526132 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:22:27.526150 | orchestrator | testbed-node-0 : ok=5  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 09:22:27.526167 | orchestrator |
2026-04-16 09:22:27.526183 | orchestrator |
2026-04-16 09:22:27.526199 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:22:27.526215 | orchestrator | Thursday 16 April 2026 09:22:27 +0000 (0:00:00.846) 0:00:20.932 ********
2026-04-16 09:22:27.526231 | orchestrator | ===============================================================================
2026-04-16 09:22:27.526246 | orchestrator | Get list of Nova cells ------------------------------------------------- 13.91s
2026-04-16 09:22:27.526262 | orchestrator | Update Nova cell mappings ----------------------------------------------- 4.52s
2026-04-16 09:22:27.526278 | orchestrator | Display update results -------------------------------------------------- 0.85s
2026-04-16 09:22:27.526295 | orchestrator | Parse cell information -------------------------------------------------- 0.16s
2026-04-16 09:22:27.526311 | orchestrator | Display cells to update ------------------------------------------------- 0.13s
2026-04-16 09:22:27.526326 | orchestrator | Use specified cell UUID if provided ------------------------------------- 0.13s
2026-04-16 09:22:27.526342 | orchestrator | Abort if multiple cells found without specific UUID and abort_on_multiple is enabled --- 0.12s
2026-04-16 09:22:27.683743 | orchestrator | + osism apply -a upgrade nova
2026-04-16 09:22:28.965861 | orchestrator | 2026-04-16 09:22:28 | INFO  | Prepare task for execution of nova.
2026-04-16 09:22:29.029163 | orchestrator | 2026-04-16 09:22:29 | INFO  | Task ba1e06e0-7760-4fa1-bdcf-5f622118f748 (nova) was prepared for execution.
2026-04-16 09:22:29.029268 | orchestrator | 2026-04-16 09:22:29 | INFO  | It takes a moment until task ba1e06e0-7760-4fa1-bdcf-5f622118f748 (nova) has been started and output is visible here.
2026-04-16 09:23:41.219577 | orchestrator |
2026-04-16 09:23:41.219686 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:23:41.219700 | orchestrator |
2026-04-16 09:23:41.219709 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-04-16 09:23:41.219717 | orchestrator | Thursday 16 April 2026 09:22:34 +0000 (0:00:01.992) 0:00:01.992 ********
2026-04-16 09:23:41.219726 | orchestrator | changed: [testbed-manager]
2026-04-16 09:23:41.219736 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:23:41.219744 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:23:41.219752 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:23:41.219760 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:23:41.219768 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:23:41.219776 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:23:41.219783 | orchestrator |
2026-04-16 09:23:41.219792 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:23:41.219800 | orchestrator | Thursday 16 April 2026 09:22:37 +0000 (0:00:03.314) 0:00:05.307 ********
2026-04-16 09:23:41.219808 | orchestrator | changed: [testbed-manager]
2026-04-16 09:23:41.219816 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:23:41.219823 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:23:41.219831 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:23:41.219839 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:23:41.219847 | orchestrator | changed: [testbed-node-4]
2026-04-16 09:23:41.219854 | orchestrator | changed: [testbed-node-5]
2026-04-16 09:23:41.219862 | orchestrator |
2026-04-16 09:23:41.219870 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:23:41.219878 | orchestrator | Thursday 16 April 2026 09:22:39 +0000 (0:00:02.072) 0:00:07.380 ********
2026-04-16 09:23:41.219886 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-04-16 09:23:41.219895 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-04-16 09:23:41.219903 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-04-16 09:23:41.219911 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-04-16 09:23:41.219918 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-04-16 09:23:41.219926 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-04-16 09:23:41.219934 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-04-16 09:23:41.219942 | orchestrator |
2026-04-16 09:23:41.219950 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-04-16 09:23:41.219958 | orchestrator |
2026-04-16 09:23:41.219966 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 09:23:41.219974 | orchestrator | Thursday 16 April 2026 09:22:43 +0000 (0:00:03.416) 0:00:10.796 ********
2026-04-16 09:23:41.219982 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:23:41.219990 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.219998 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220006 | orchestrator |
2026-04-16 09:23:41.220014 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-16 09:23:41.220022 | orchestrator | Thursday 16 April 2026 09:22:44 +0000 (0:00:01.603) 0:00:12.400 ********
2026-04-16 09:23:41.220030 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:23:41.220038 | orchestrator |
2026-04-16 09:23:41.220070 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-16 09:23:41.220080 | orchestrator | Thursday 16 April 2026 09:22:47 +0000 (0:00:02.303) 0:00:14.703 ********
2026-04-16 09:23:41.220090 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220099 | orchestrator |
2026-04-16 09:23:41.220108 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-04-16 09:23:41.220117 | orchestrator | Thursday 16 April 2026 09:22:48 +0000 (0:00:01.914) 0:00:16.618 ********
2026-04-16 09:23:41.220126 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220136 | orchestrator |
2026-04-16 09:23:41.220145 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-04-16 09:23:41.220154 | orchestrator | Thursday 16 April 2026 09:22:50 +0000 (0:00:01.999) 0:00:18.618 ********
2026-04-16 09:23:41.220163 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220172 | orchestrator |
2026-04-16 09:23:41.220181 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-04-16 09:23:41.220190 | orchestrator | Thursday 16 April 2026 09:22:54 +0000 (0:00:03.802) 0:00:22.420 ********
2026-04-16 09:23:41.220200 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220208 | orchestrator |
2026-04-16 09:23:41.220217 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-04-16 09:23:41.220226 | orchestrator |
2026-04-16 09:23:41.220236 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-04-16 09:23:41.220245 | orchestrator | Thursday 16 April 2026 09:23:14 +0000 (0:00:20.022) 0:00:42.442 ********
2026-04-16 09:23:41.220254 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:23:41.220263 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.220273 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220282 | orchestrator |
2026-04-16 09:23:41.220291 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-04-16 09:23:41.220300 | orchestrator | Thursday 16 April 2026 09:23:16 +0000 (0:00:01.298) 0:00:43.741 ********
2026-04-16 09:23:41.220308 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:23:41.220316 | orchestrator |
2026-04-16 09:23:41.220324 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-04-16 09:23:41.220331 | orchestrator | Thursday 16 April 2026 09:23:17 +0000 (0:00:01.638) 0:00:45.380 ********
2026-04-16 09:23:41.220339 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.220347 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220355 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220363 | orchestrator |
2026-04-16 09:23:41.220371 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-04-16 09:23:41.220379 | orchestrator | Thursday 16 April 2026 09:23:19 +0000 (0:00:01.584) 0:00:46.964 ********
2026-04-16 09:23:41.220387 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.220395 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220403 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220411 | orchestrator |
2026-04-16 09:23:41.220434 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-04-16 09:23:41.220443 | orchestrator | Thursday 16 April 2026 09:23:21 +0000 (0:00:01.961) 0:00:48.926 ********
2026-04-16 09:23:41.220472 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.220481 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220489 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220497 | orchestrator |
2026-04-16 09:23:41.220505 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-04-16 09:23:41.220513 | orchestrator | Thursday 16 April 2026 09:23:24 +0000 (0:00:03.384) 0:00:52.310 ********
2026-04-16 09:23:41.220521 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:23:41.220529 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:23:41.220537 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:23:41.220548 | orchestrator |
2026-04-16 09:23:41.220556 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-04-16 09:23:41.220571 | orchestrator |
2026-04-16 09:23:41.220579 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-04-16 09:23:41.220587 | orchestrator | Thursday 16 April 2026 09:23:38 +0000 (0:00:13.392) 0:01:05.703 ********
2026-04-16 09:23:41.220595 | orchestrator | included: /ansible/roles/nova/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:23:41.220605 | orchestrator |
2026-04-16 09:23:41.220612 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-04-16 09:23:41.220620 | orchestrator | Thursday 16 April 2026 09:23:39 +0000 (0:00:01.827) 0:01:07.530 ********
2026-04-16 09:23:41.220633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:23:41.220646 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:23:41.220662 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:23:52.531651 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:23:52.531834 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:23:52.531864 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:23:52.531887 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:52.531933 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:23:52.531966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-04-16 09:23:52.531987 | orchestrator | 2026-04-16 09:23:52.532008 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-04-16 09:23:52.532028 | orchestrator | Thursday 16 April 2026 09:23:43 +0000 (0:00:03.237) 0:01:10.768 ******** 2026-04-16 09:23:52.532049 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:23:52.532072 | orchestrator | 2026-04-16 09:23:52.532092 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-04-16 09:23:52.532111 | orchestrator | Thursday 16 April 2026 09:23:44 +0000 (0:00:01.115) 0:01:11.884 ******** 2026-04-16 09:23:52.532130 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:23:52.532150 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:23:52.532169 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:23:52.532189 | orchestrator | 2026-04-16 09:23:52.532208 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-04-16 09:23:52.532227 | orchestrator | Thursday 16 April 2026 09:23:45 +0000 (0:00:01.575) 0:01:13.459 ******** 2026-04-16 09:23:52.532246 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:23:52.532264 | orchestrator | 2026-04-16 09:23:52.532284 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-04-16 09:23:52.532303 | orchestrator | Thursday 16 April 2026 09:23:47 +0000 (0:00:02.069) 0:01:15.528 ******** 2026-04-16 09:23:52.532322 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:23:52.532342 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:23:52.532361 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:23:52.532380 | orchestrator | 2026-04-16 09:23:52.532398 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-04-16 09:23:52.532416 | orchestrator | Thursday 16 April 2026 09:23:49 +0000 
(0:00:01.462) 0:01:16.991 ******** 2026-04-16 09:23:52.532435 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:23:52.532484 | orchestrator | 2026-04-16 09:23:52.532504 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 09:23:52.532523 | orchestrator | Thursday 16 April 2026 09:23:51 +0000 (0:00:01.818) 0:01:18.810 ******** 2026-04-16 09:23:52.532545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:52.532593 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:55.772899 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:55.773002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 
'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:55.773019 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 
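The per-service items echoed by these tasks all share one shape: a `healthcheck` dict with string-valued `interval`, `retries`, `start_period`, `timeout` (seconds) and a `test` list of the form `['CMD-SHELL', '<command>']`. As a minimal sketch of how such a dict maps onto Docker-style health-check flags (the helper function and variable names here are illustrative, not part of kolla-ansible):

```python
# Illustrative helper: translate a kolla-style healthcheck dict (as echoed in
# the task output above) into docker-run style flags. The dict layout mirrors
# the log; the function itself is a hypothetical example.
def healthcheck_to_docker_args(hc):
    """Interval/timeout/start_period are plain second counts in the log."""
    return [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
        # test is always ['CMD-SHELL', '<command>'] in this output
        "--health-cmd", hc["test"][1].strip(),
    ]

# Example healthcheck dict copied from the nova-api item for testbed-node-0.
nova_api_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
    "timeout": "30",
}
print(healthcheck_to_docker_args(nova_api_hc))
```

Note that `nova-api` and `nova-metadata` use `healthcheck_curl` against their bound port, while `nova-scheduler` (which has no HTTP endpoint) uses `healthcheck_port nova-scheduler 5672`, i.e. it checks for an open connection to RabbitMQ.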
2026-04-16 09:23:55.773072 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:23:55.773086 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:23:55.773099 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:23:55.773109 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:23:55.773121 | orchestrator | 2026-04-16 09:23:55.773133 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 09:23:55.773144 | orchestrator | Thursday 16 April 2026 09:23:55 +0000 (0:00:04.230) 0:01:23.040 ******** 2026-04-16 09:23:55.773157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:55.773183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:23:57.436400 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:23:57.436436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:23:57.436589 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:23:57.436634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:23:57.436709 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:23:57.436729 | orchestrator | 2026-04-16 09:23:57.436751 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 09:23:57.436769 | orchestrator | Thursday 16 April 2026 09:23:57 +0000 (0:00:01.657) 0:01:24.698 ******** 2026-04-16 09:23:57.436785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:23:57.436813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:00.499999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:00.500104 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:24:00.500124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:00.500163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:00.500191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:00.500204 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:24:00.500235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:00.500248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:00.500268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:00.500280 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:24:00.500292 | orchestrator | 2026-04-16 09:24:00.500304 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-04-16 09:24:00.500316 | orchestrator | Thursday 16 April 2026 09:23:58 +0000 (0:00:01.956) 0:01:26.654 ******** 2026-04-16 09:24:00.500334 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:00.500354 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.438878 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.439034 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.439068 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.439101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.439116 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:06.439135 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:06.439146 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:06.439159 | orchestrator | 2026-04-16 09:24:06.439178 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-04-16 09:24:06.439196 | orchestrator | Thursday 16 April 2026 09:24:03 +0000 (0:00:04.447) 0:01:31.102 ******** 2026-04-16 09:24:06.439220 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:06.439249 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:12.771733 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:12.771864 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:12.771910 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:12.771943 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:24:12.771965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:12.771978 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:12.771989 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:24:12.772000 | orchestrator | 2026-04-16 09:24:12.772012 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-04-16 09:24:12.772023 | orchestrator | Thursday 16 April 2026 09:24:12 +0000 (0:00:08.916) 0:01:40.018 ******** 2026-04-16 09:24:12.772039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:12.772058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:23.900498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:23.900653 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:24:23.900680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:23.900713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': 
'30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:23.900727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:23.900739 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:24:23.900774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:23.900813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:24:23.900826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:24:23.900838 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:24:23.900850 | orchestrator | 2026-04-16 09:24:23.900862 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-04-16 09:24:23.900874 | orchestrator | Thursday 16 April 2026 09:24:14 +0000 (0:00:01.814) 0:01:41.833 ******** 2026-04-16 09:24:23.900885 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:24:23.900896 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:24:23.900907 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:24:23.900918 | orchestrator | 2026-04-16 09:24:23.900929 | orchestrator | TASK [nova : Copying over nova-metadata-wsgi.conf] ***************************** 2026-04-16 09:24:23.900945 | orchestrator | Thursday 16 April 2026 09:24:16 +0000 (0:00:01.853) 0:01:43.687 ******** 2026-04-16 09:24:23.900957 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:24:23.900968 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:24:23.900981 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:24:23.900993 | orchestrator | 2026-04-16 09:24:23.901005 | orchestrator | TASK [nova : Copying over vendordata file for nova services] ******************* 2026-04-16 09:24:23.901018 | orchestrator | Thursday 16 April 2026 09:24:17 +0000 (0:00:01.577) 0:01:45.265 ******** 2026-04-16 09:24:23.901031 | orchestrator | skipping: [testbed-node-0] => (item=nova-metadata)  2026-04-16 09:24:23.901044 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-04-16 09:24:23.901065 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:24:23.901079 | orchestrator | skipping: [testbed-node-1] => (item=nova-metadata)  2026-04-16 09:24:23.901091 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-04-16 09:24:23.901103 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:24:23.901115 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-metadata)
2026-04-16 09:24:23.901128 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-04-16 09:24:23.901140 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:24:23.901153 | orchestrator |
2026-04-16 09:24:23.901165 | orchestrator | TASK [Configure uWSGI for Nova] ************************************************
2026-04-16 09:24:23.901178 | orchestrator | Thursday 16 April 2026 09:24:18 +0000 (0:00:01.367) 0:01:46.632 ********
2026-04-16 09:24:23.901191 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-api', 'port': '8774', 'workers': '2'})
2026-04-16 09:24:23.901206 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova-metadata', 'port': '8775', 'workers': '2'})
2026-04-16 09:24:23.901219 | orchestrator |
2026-04-16 09:24:23.901232 | orchestrator | TASK [service-uwsgi-config : Copying over nova-api uWSGI config] ***************
2026-04-16 09:24:23.901245 | orchestrator | Thursday 16 April 2026 09:24:21 +0000 (0:00:02.780) 0:01:49.413 ********
2026-04-16 09:24:23.901257 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:24:23.901270 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:24:23.901281 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:24:23.901292 | orchestrator |
2026-04-16 09:24:50.669392 | orchestrator | TASK [service-uwsgi-config : Copying over nova-metadata uWSGI config] **********
2026-04-16 09:24:50.669562 | orchestrator | Thursday 16 April 2026 09:24:24 +0000 (0:00:02.896) 0:01:52.309 ********
2026-04-16 09:24:50.669577 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:24:50.669587 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:24:50.669596 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:24:50.669605 | orchestrator |
2026-04-16 09:24:50.669614 | orchestrator | TASK [nova : Run Nova upgrade checks] ******************************************
2026-04-16 09:24:50.669623 | orchestrator | Thursday 16 April 2026 09:24:28 +0000 (0:00:03.578) 0:01:55.888 ********
2026-04-16 09:24:50.669632 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:24:50.669641 | orchestrator |
2026-04-16 09:24:50.669650 | orchestrator | TASK [nova : Upgrade status check result] **************************************
2026-04-16 09:24:50.669659 | orchestrator | Thursday 16 April 2026 09:24:47 +0000 (0:00:19.743) 0:02:15.632 ********
2026-04-16 09:24:50.669668 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:24:50.669676 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:24:50.669685 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:24:50.669693 | orchestrator |
2026-04-16 09:24:50.669702 | orchestrator | TASK [nova : Stopping top level nova services] *********************************
2026-04-16 09:24:50.669711 | orchestrator | Thursday 16 April 2026 09:24:49 +0000 (0:00:01.463) 0:02:17.096 ********
2026-04-16 09:24:50.669724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:50.669797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:50.669810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:50.669819 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:24:50.669847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:50.669858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:50.669874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:50.669883 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:24:50.669897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:50.669915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:56.073717 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:24:56.073750 | orchestrator |
2026-04-16 09:24:56.073773 | orchestrator | TASK [service-check-containers : nova | Check containers] **********************
2026-04-16 09:24:56.073794 | orchestrator | Thursday 16 April 2026 09:24:52 +0000 (0:00:02.753) 0:02:19.850 ********
2026-04-16 09:24:56.073818 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073893 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073911 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073970 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.073988 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:56.074001 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:56.074087 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:59.664716 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:59.664847 | orchestrator |
2026-04-16 09:24:59.664865 | orchestrator | TASK [service-check-containers : nova | Notify handlers to restart containers] ***
2026-04-16 09:24:59.664878 | orchestrator | Thursday 16 April 2026 09:24:57 +0000 (0:00:05.137) 0:02:24.988 ********
2026-04-16 09:24:59.664890 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 09:24:59.664903 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:24:59.664914 | orchestrator | }
2026-04-16 09:24:59.664925 | orchestrator | ok: [testbed-node-1] => {
2026-04-16 09:24:59.664936 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:24:59.664947 | orchestrator | }
2026-04-16 09:24:59.664958 | orchestrator | ok: [testbed-node-2] => {
2026-04-16 09:24:59.664969 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:24:59.664980 | orchestrator | }
2026-04-16 09:24:59.664992 | orchestrator |
2026-04-16 09:24:59.665003 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 09:24:59.665015 | orchestrator | Thursday 16 April 2026 09:24:58 +0000 (0:00:01.351) 0:02:26.340 ********
2026-04-16 09:24:59.665044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:59.665060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:59.665074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:59.665086 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:24:59.665119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:59.665147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:59.665160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:24:59.665172 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:24:59.665185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:24:59.665206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:25:40.537389 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:25:40.537550 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:25:40.537565 | orchestrator |
2026-04-16 09:25:40.537576 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-16 09:25:40.537586 | orchestrator | Thursday 16 April 2026 09:25:00 +0000 (0:00:02.213) 0:02:28.553 ********
2026-04-16 09:25:40.537595 | orchestrator |
2026-04-16 09:25:40.537604 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-16 09:25:40.537613 | orchestrator | Thursday 16 April 2026 09:25:01 +0000 (0:00:00.531) 0:02:29.085 ********
2026-04-16 09:25:40.537622 | orchestrator |
2026-04-16 09:25:40.537631 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-04-16 09:25:40.537654 | orchestrator | Thursday 16 April 2026 09:25:01 +0000 (0:00:00.507) 0:02:29.592 ********
2026-04-16 09:25:40.537663 | orchestrator |
2026-04-16 09:25:40.537672 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-04-16 09:25:40.537681 | orchestrator |
2026-04-16 09:25:40.537690 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-04-16 09:25:40.537699 | orchestrator | Thursday 16 April 2026 09:25:03 +0000 (0:00:01.497) 0:02:31.089 ********
2026-04-16 09:25:40.537708 | orchestrator | included: /ansible/roles/nova-cell/tasks/upgrade.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:25:40.537719 | orchestrator |
2026-04-16 09:25:40.537728 | orchestrator | TASK [nova-cell : Get new Libvirt version] *************************************
2026-04-16 09:25:40.537736 | orchestrator | Thursday 16 April 2026 09:25:06 +0000 (0:00:02.614) 0:02:33.704 ********
2026-04-16 09:25:40.537745 | orchestrator | changed: [testbed-node-3]
2026-04-16 09:25:40.537754 | orchestrator |
2026-04-16 09:25:40.537763 | orchestrator | TASK [nova-cell : Cache new Libvirt version] ***********************************
2026-04-16 09:25:40.537771 | orchestrator | Thursday 16 April 2026 09:25:10 +0000 (0:00:04.392) 0:02:38.097 ********
2026-04-16 09:25:40.537780 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:25:40.537790 | orchestrator |
2026-04-16 09:25:40.537799 | orchestrator | TASK [Get nova_libvirt image info] *********************************************
2026-04-16 09:25:40.537807 | orchestrator | Thursday 16 April 2026 09:25:12 +0000 (0:00:02.252) 0:02:40.349 ********
2026-04-16 09:25:40.537816 | orchestrator | included: service-image-info for testbed-node-3
2026-04-16 09:25:40.537825 | orchestrator |
2026-04-16 09:25:40.537834 | orchestrator | TASK [service-image-info : community.docker.docker_image_info] *****************
2026-04-16 09:25:40.537864 | orchestrator | Thursday 16 April 2026 09:25:14 +0000 (0:00:02.035) 0:02:42.385 ********
2026-04-16 09:25:40.537873 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:25:40.537882 | orchestrator |
2026-04-16 09:25:40.537890 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-16 09:25:40.537899 | orchestrator | Thursday 16 April 2026 09:25:19 +0000 (0:00:04.352) 0:02:46.738 ********
2026-04-16 09:25:40.537907 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:25:40.537916 | orchestrator |
2026-04-16 09:25:40.537924 | orchestrator | TASK [service-image-info : containers.podman.podman_image_info] ****************
2026-04-16 09:25:40.537934 | orchestrator | Thursday 16 April 2026 09:25:22 +0000 (0:00:03.082) 0:02:49.820 ********
2026-04-16 09:25:40.537944 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:25:40.537955 | orchestrator |
2026-04-16 09:25:40.537965 | orchestrator | TASK [service-image-info : set_fact] *******************************************
2026-04-16 09:25:40.537975 | orchestrator | Thursday 16 April 2026 09:25:25 +0000 (0:00:02.858) 0:02:52.679 ********
2026-04-16 09:25:40.537988 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:25:40.538004 | orchestrator |
2026-04-16 09:25:40.538089 | orchestrator | TASK [nova-cell : Get container facts] *****************************************
2026-04-16 09:25:40.538106 | orchestrator | Thursday 16 April 2026 09:25:27 +0000 (0:00:02.928) 0:02:55.608 ********
2026-04-16 09:25:40.538121 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:25:40.538137 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:25:40.538152 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:25:40.538167 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:25:40.538181 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:25:40.538197 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:25:40.538213 | orchestrator |
2026-04-16 09:25:40.538228 | orchestrator | TASK [nova-cell : Get current Libvirt version] *********************************
2026-04-16 09:25:40.538243 | orchestrator | Thursday 16 April 2026 09:25:32 +0000 (0:00:05.029) 0:03:00.637 ********
2026-04-16 09:25:40.538257 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:25:40.538268 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:25:40.538278 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:25:40.538288 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:25:40.538298 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:25:40.538306 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:25:40.538315 | orchestrator |
2026-04-16 09:25:40.538324 | orchestrator | TASK [nova-cell : Check that the new Libvirt version is >= current] ************
2026-04-16 09:25:40.538334 | orchestrator | Thursday 16 April 2026 09:25:36 +0000 (0:00:03.948) 0:03:04.585 ********
2026-04-16 09:25:40.538349 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:25:40.538362 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:25:40.538378 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:25:40.538393 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:25:40.538480 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:25:40.538498 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:25:40.538513 | orchestrator |
2026-04-16 09:25:40.538527 | orchestrator | TASK [nova-cell : Stopping nova cell services] *********************************
2026-04-16 09:25:40.538554 | orchestrator | Thursday 16 April 2026 09:25:39 +0000 (0:00:02.713) 0:03:07.299 ********
2026-04-16 09:25:40.538554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-16 09:25:40.538587 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:25:40.538604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:25:40.538624 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:25:40.538647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-16 09:25:40.538661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:25:40.538687 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:25:50.323081 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:25:50.323231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:25:50.323275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:25:50.323289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:25:50.323302 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:25:50.323315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:25:50.323328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:25:50.323341 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:25:50.323372 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:25:50.323414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:25:50.323466 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:25:50.323479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:25:50.323491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:25:50.323502 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:25:50.323513 | orchestrator | 2026-04-16 09:25:50.323525 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-04-16 09:25:50.323538 | orchestrator | Thursday 16 April 2026 09:25:42 +0000 (0:00:02.733) 0:03:10.032 ******** 2026-04-16 09:25:50.323549 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:25:50.323560 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:25:50.323571 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:25:50.323584 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:25:50.323598 | orchestrator | 2026-04-16 09:25:50.323611 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-04-16 09:25:50.323624 | orchestrator | Thursday 16 April 2026 09:25:44 +0000 (0:00:02.065) 0:03:12.098 ******** 2026-04-16 09:25:50.323637 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-16 09:25:50.323651 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-16 09:25:50.323663 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-16 09:25:50.323676 | orchestrator | 
2026-04-16 09:25:50.323688 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-04-16 09:25:50.323700 | orchestrator | Thursday 16 April 2026 09:25:46 +0000 (0:00:01.787) 0:03:13.886 ******** 2026-04-16 09:25:50.323712 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-04-16 09:25:50.323726 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-04-16 09:25:50.323738 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-04-16 09:25:50.323760 | orchestrator | 2026-04-16 09:25:50.323774 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-04-16 09:25:50.323786 | orchestrator | Thursday 16 April 2026 09:25:48 +0000 (0:00:02.170) 0:03:16.056 ******** 2026-04-16 09:25:50.323798 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-04-16 09:25:50.323811 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:25:50.323823 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-04-16 09:25:50.323835 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:25:50.323847 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-04-16 09:25:50.323859 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:25:50.323872 | orchestrator | 2026-04-16 09:25:50.323884 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-04-16 09:25:50.323896 | orchestrator | Thursday 16 April 2026 09:25:49 +0000 (0:00:01.336) 0:03:17.393 ******** 2026-04-16 09:25:50.323909 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:25:50.323922 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:25:50.323935 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:25:50.323955 | orchestrator | ok: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:25:58.799346 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:25:58.799520 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-04-16 09:25:58.799547 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:25:58.799565 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:25:58.799598 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:25:58.799609 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-04-16 09:25:58.799619 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-04-16 09:25:58.799629 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:25:58.799639 | orchestrator | ok: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:25:58.799648 | orchestrator | ok: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:25:58.799658 | orchestrator | ok: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-04-16 09:25:58.799667 | orchestrator | 2026-04-16 09:25:58.799678 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-04-16 09:25:58.799687 | orchestrator | Thursday 16 April 2026 09:25:51 +0000 (0:00:02.262) 0:03:19.655 ******** 2026-04-16 09:25:58.799697 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:25:58.799707 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:25:58.799716 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:25:58.799726 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:25:58.799736 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:25:58.799746 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:25:58.799755 | orchestrator | 2026-04-16 09:25:58.799765 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] 
*************************************** 2026-04-16 09:25:58.799774 | orchestrator | Thursday 16 April 2026 09:25:54 +0000 (0:00:02.198) 0:03:21.853 ******** 2026-04-16 09:25:58.799784 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:25:58.799793 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:25:58.799803 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:25:58.799812 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:25:58.799822 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:25:58.799831 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:25:58.799841 | orchestrator | 2026-04-16 09:25:58.799850 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-04-16 09:25:58.799860 | orchestrator | Thursday 16 April 2026 09:25:56 +0000 (0:00:02.519) 0:03:24.373 ******** 2026-04-16 09:25:58.799873 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:25:58.799907 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:25:58.799942 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:25:58.799960 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:25:58.799973 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:25:58.799985 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:25:58.800004 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:25:58.800017 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:25:58.800030 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:25:58.800053 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713320 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713438 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713474 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713485 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-04-16 09:26:04.713503 | orchestrator | 2026-04-16 09:26:04.713515 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-04-16 09:26:04.713525 | orchestrator | Thursday 16 April 2026 09:26:00 +0000 (0:00:03.624) 0:03:27.998 ******** 2026-04-16 09:26:04.713534 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:26:04.713543 | orchestrator | 2026-04-16 09:26:04.713564 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-04-16 09:26:04.713572 | orchestrator | Thursday 16 April 2026 09:26:02 +0000 (0:00:02.185) 0:03:30.183 ******** 2026-04-16 09:26:04.713596 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713613 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713622 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713631 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:26:04.713659 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922415 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922597 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922619 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922646 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922677 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922711 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922780 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922795 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:26:07.922807 | orchestrator | 2026-04-16 09:26:07.922821 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-04-16 09:26:07.922833 | orchestrator | Thursday 16 April 2026 09:26:07 +0000 (0:00:04.522) 0:03:34.706 ******** 2026-04-16 09:26:07.922849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:07.922869 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:07.922894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:09.007897 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:26:09.008003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:09.008025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:09.008039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:09.008052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:09.008081 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:09.008153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:09.008168 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:26:09.008180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:09.008193 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:26:09.008205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:09.008217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:09.008228 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:26:09.008240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:09.008266 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:26:09.008278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:09.008298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:12.005599 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:26:12.005682 | orchestrator | 2026-04-16 
09:26:12.005692 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-04-16 09:26:12.005700 | orchestrator | Thursday 16 April 2026 09:26:10 +0000 (0:00:03.061) 0:03:37.767 ******** 2026-04-16 09:26:12.005709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:12.005719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:12.005727 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:12.005762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:12.005770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:12.005778 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:26:12.005799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:12.005806 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:26:12.005813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:12.005820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:12.005836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:26:12.005842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:26:12.005854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:46.118586 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:26:46.118733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:46.118762 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:26:46.118784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:26:46.118804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:26:46.118856 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:26:46.118894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:26:46.118914 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:26:46.118934 | orchestrator | 2026-04-16 09:26:46.118953 | orchestrator | TASK 
[nova-cell : include_tasks] *********************************************** 2026-04-16 09:26:46.118971 | orchestrator | Thursday 16 April 2026 09:26:13 +0000 (0:00:03.505) 0:03:41.273 ******** 2026-04-16 09:26:46.118987 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:26:46.119003 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:26:46.119019 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:26:46.119035 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:26:46.119052 | orchestrator | 2026-04-16 09:26:46.119068 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-04-16 09:26:46.119084 | orchestrator | Thursday 16 April 2026 09:26:15 +0000 (0:00:02.350) 0:03:43.624 ******** 2026-04-16 09:26:46.119100 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:26:46.119116 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:26:46.119131 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:26:46.119146 | orchestrator | 2026-04-16 09:26:46.119162 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-04-16 09:26:46.119178 | orchestrator | Thursday 16 April 2026 09:26:17 +0000 (0:00:01.966) 0:03:45.591 ******** 2026-04-16 09:26:46.119194 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:26:46.119210 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:26:46.119225 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:26:46.119240 | orchestrator | 2026-04-16 09:26:46.119256 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-04-16 09:26:46.119271 | orchestrator | Thursday 16 April 2026 09:26:19 +0000 (0:00:02.074) 0:03:47.665 ******** 2026-04-16 09:26:46.119286 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:26:46.119302 | orchestrator | ok: 
[testbed-node-4] 2026-04-16 09:26:46.119316 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:26:46.119331 | orchestrator | 2026-04-16 09:26:46.119370 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-04-16 09:26:46.119388 | orchestrator | Thursday 16 April 2026 09:26:21 +0000 (0:00:01.923) 0:03:49.589 ******** 2026-04-16 09:26:46.119404 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:26:46.119450 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:26:46.119466 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:26:46.119481 | orchestrator | 2026-04-16 09:26:46.119497 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-04-16 09:26:46.119513 | orchestrator | Thursday 16 April 2026 09:26:23 +0000 (0:00:01.527) 0:03:51.117 ******** 2026-04-16 09:26:46.119529 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-16 09:26:46.119546 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:26:46.119562 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:26:46.119592 | orchestrator | 2026-04-16 09:26:46.119607 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-04-16 09:26:46.119623 | orchestrator | Thursday 16 April 2026 09:26:25 +0000 (0:00:02.204) 0:03:53.322 ******** 2026-04-16 09:26:46.119639 | orchestrator | ok: [testbed-node-3] => (item=nova-compute) 2026-04-16 09:26:46.119655 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:26:46.119670 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:26:46.119686 | orchestrator | 2026-04-16 09:26:46.119701 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-04-16 09:26:46.119716 | orchestrator | Thursday 16 April 2026 09:26:28 +0000 (0:00:02.399) 0:03:55.721 ******** 2026-04-16 09:26:46.119731 | orchestrator | ok: 
[testbed-node-3] => (item=nova-compute) 2026-04-16 09:26:46.119746 | orchestrator | ok: [testbed-node-4] => (item=nova-compute) 2026-04-16 09:26:46.119762 | orchestrator | ok: [testbed-node-5] => (item=nova-compute) 2026-04-16 09:26:46.119777 | orchestrator | ok: [testbed-node-3] => (item=nova-libvirt) 2026-04-16 09:26:46.119793 | orchestrator | ok: [testbed-node-4] => (item=nova-libvirt) 2026-04-16 09:26:46.119808 | orchestrator | ok: [testbed-node-5] => (item=nova-libvirt) 2026-04-16 09:26:46.119823 | orchestrator | 2026-04-16 09:26:46.119838 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-04-16 09:26:46.119854 | orchestrator | Thursday 16 April 2026 09:26:32 +0000 (0:00:04.836) 0:04:00.558 ******** 2026-04-16 09:26:46.119870 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:26:46.119885 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:26:46.119900 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:26:46.119915 | orchestrator | 2026-04-16 09:26:46.119931 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-04-16 09:26:46.119946 | orchestrator | Thursday 16 April 2026 09:26:34 +0000 (0:00:01.364) 0:04:01.922 ******** 2026-04-16 09:26:46.119962 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:26:46.119978 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:26:46.119993 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:26:46.120008 | orchestrator | 2026-04-16 09:26:46.120094 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-04-16 09:26:46.120111 | orchestrator | Thursday 16 April 2026 09:26:35 +0000 (0:00:01.357) 0:04:03.280 ******** 2026-04-16 09:26:46.120128 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:26:46.120145 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:26:46.120160 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:26:46.120176 | 
orchestrator | 2026-04-16 09:26:46.120191 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-04-16 09:26:46.120206 | orchestrator | Thursday 16 April 2026 09:26:38 +0000 (0:00:02.648) 0:04:05.928 ******** 2026-04-16 09:26:46.120223 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:26:46.120252 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:26:46.120270 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'ceph-ephemeral-nova', 'desc': 'Ceph Client Secret for Ephemeral Storage (Nova)', 'enabled': True}) 2026-04-16 09:26:46.120287 | orchestrator | ok: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:26:46.120304 | orchestrator | ok: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:26:46.120320 | orchestrator | ok: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'ceph-persistent-cinder', 'desc': 'Ceph Client Secret for Persistent Storage (Cinder)', 'enabled': 'yes'}) 2026-04-16 09:26:46.120350 | orchestrator | 2026-04-16 09:26:46.120364 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-04-16 09:26:46.120374 | orchestrator | Thursday 16 April 2026 09:26:42 +0000 (0:00:04.344) 0:04:10.273 ******** 2026-04-16 09:26:46.120384 | orchestrator | ok: 
[testbed-node-3] => (item=None) 2026-04-16 09:26:46.120394 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 09:26:46.120404 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-16 09:26:46.120436 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-04-16 09:26:46.120447 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:26:46.120457 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-04-16 09:26:46.120466 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:26:46.120476 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-04-16 09:26:46.120486 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:26:46.120495 | orchestrator | 2026-04-16 09:26:46.120518 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-04-16 09:27:03.238173 | orchestrator | Thursday 16 April 2026 09:26:47 +0000 (0:00:04.497) 0:04:14.770 ******** 2026-04-16 09:27:03.238270 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:03.238282 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:03.238290 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:03.238298 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:27:03.238305 | orchestrator | 2026-04-16 09:27:03.238313 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-04-16 09:27:03.238320 | orchestrator | Thursday 16 April 2026 09:26:50 +0000 (0:00:03.140) 0:04:17.911 ******** 2026-04-16 09:27:03.238327 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:27:03.238334 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:27:03.238341 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:27:03.238347 | orchestrator | 2026-04-16 09:27:03.238355 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-04-16 09:27:03.238361 | 
orchestrator | Thursday 16 April 2026 09:26:52 +0000 (0:00:02.133) 0:04:20.044 ******** 2026-04-16 09:27:03.238368 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:03.238375 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:03.238382 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:03.238388 | orchestrator | 2026-04-16 09:27:03.238395 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-04-16 09:27:03.238402 | orchestrator | Thursday 16 April 2026 09:26:53 +0000 (0:00:01.492) 0:04:21.536 ******** 2026-04-16 09:27:03.238433 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:03.238445 | orchestrator | 2026-04-16 09:27:03.238452 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-04-16 09:27:03.238470 | orchestrator | Thursday 16 April 2026 09:26:55 +0000 (0:00:01.137) 0:04:22.673 ******** 2026-04-16 09:27:03.238477 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:03.238491 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:03.238498 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:03.238505 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:03.238512 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:03.238519 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:03.238525 | orchestrator | 2026-04-16 09:27:03.238532 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-04-16 09:27:03.238539 | orchestrator | Thursday 16 April 2026 09:26:56 +0000 (0:00:01.751) 0:04:24.425 ******** 2026-04-16 09:27:03.238546 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:27:03.238553 | orchestrator | 2026-04-16 09:27:03.238559 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-04-16 09:27:03.238566 | orchestrator | Thursday 16 April 2026 09:26:58 +0000 (0:00:01.788) 
0:04:26.213 ******** 2026-04-16 09:27:03.238592 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:03.238599 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:03.238606 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:03.238613 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:03.238619 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:03.238626 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:03.238633 | orchestrator | 2026-04-16 09:27:03.238640 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-04-16 09:27:03.238647 | orchestrator | Thursday 16 April 2026 09:27:00 +0000 (0:00:02.091) 0:04:28.305 ******** 2026-04-16 09:27:03.238669 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238680 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238702 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238711 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238726 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238739 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238748 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238758 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:27:03.238772 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665188 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665254 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665268 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665295 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:06.665307 | orchestrator | 2026-04-16 09:27:06.665319 | orchestrator | TASK [nova-cell : Copying over nova.conf] 
************************************** 2026-04-16 09:27:06.665331 | orchestrator | Thursday 16 April 2026 09:27:05 +0000 (0:00:04.611) 0:04:32.917 ******** 2026-04-16 09:27:06.665342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:27:06.665361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:06.665376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:27:06.665387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:06.665485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:27:17.772963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:17.773196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773225 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773260 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773277 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773316 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773333 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773366 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 09:27:17.773454 | orchestrator | 2026-04-16 09:27:17.773474 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-04-16 09:27:17.773492 | orchestrator | Thursday 16 April 2026 09:27:13 +0000 (0:00:07.904) 0:04:40.822 ******** 2026-04-16 09:27:17.773508 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:17.773524 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:17.773539 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:17.773554 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:17.773569 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
09:27:17.773583 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:17.773597 | orchestrator | 2026-04-16 09:27:17.773613 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-04-16 09:27:17.773627 | orchestrator | Thursday 16 April 2026 09:27:15 +0000 (0:00:02.831) 0:04:43.654 ******** 2026-04-16 09:27:17.773642 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:27:17.773658 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:27:17.773673 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-04-16 09:27:17.773688 | orchestrator | ok: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:27:17.773704 | orchestrator | ok: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:27:17.773720 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:27:17.773750 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:17.773765 | orchestrator | ok: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-04-16 09:27:17.773781 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:27:17.773795 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:17.773822 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-04-16 09:27:47.109462 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.109587 | orchestrator | ok: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:27:47.109606 | orchestrator | ok: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:27:47.109616 | 
orchestrator | ok: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-04-16 09:27:47.109628 | orchestrator | 2026-04-16 09:27:47.109640 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-04-16 09:27:47.109650 | orchestrator | Thursday 16 April 2026 09:27:20 +0000 (0:00:04.498) 0:04:48.153 ******** 2026-04-16 09:27:47.109661 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:47.109672 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:47.109681 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:47.109691 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.109701 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.109711 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.109720 | orchestrator | 2026-04-16 09:27:47.109731 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-04-16 09:27:47.109741 | orchestrator | Thursday 16 April 2026 09:27:22 +0000 (0:00:01.755) 0:04:49.908 ******** 2026-04-16 09:27:47.109751 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:27:47.109762 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:27:47.109772 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-04-16 09:27:47.109783 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 09:27:47.109794 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-04-16 09:27:47.109804 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2026-04-16 09:27:47.109814 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109824 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109844 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.109873 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109893 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.109903 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-04-16 09:27:47.109913 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.109923 | orchestrator | ok: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:27:47.109933 | orchestrator | ok: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:27:47.109968 | orchestrator | ok: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:27:47.109979 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:27:47.109990 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-04-16 09:27:47.110000 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 
'service': 'nova-libvirt'}) 2026-04-16 09:27:47.110011 | orchestrator | 2026-04-16 09:27:47.110084 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-04-16 09:27:47.110095 | orchestrator | Thursday 16 April 2026 09:27:28 +0000 (0:00:06.477) 0:04:56.386 ******** 2026-04-16 09:27:47.110105 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:27:47.110115 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:27:47.110134 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-04-16 09:27:47.110144 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:27:47.110154 | orchestrator | ok: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:27:47.110164 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:27:47.110174 | orchestrator | ok: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:27:47.110184 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-04-16 09:27:47.110193 | orchestrator | ok: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-04-16 09:27:47.110225 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:27:47.110237 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:27:47.110247 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-04-16 09:27:47.110256 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:27:47.110266 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 09:27:47.110276 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:27:47.110285 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.110295 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-04-16 09:27:47.110306 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.110316 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:27:47.110326 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:27:47.110337 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-04-16 09:27:47.110347 | orchestrator | ok: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:27:47.110357 | orchestrator | ok: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:27:47.110367 | orchestrator | ok: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-04-16 09:27:47.110376 | orchestrator | ok: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:27:47.110386 | orchestrator | ok: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:27:47.110420 | orchestrator | ok: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-04-16 09:27:47.110432 | orchestrator | 2026-04-16 09:27:47.110443 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-04-16 09:27:47.110454 | orchestrator | Thursday 16 April 2026 09:27:36 +0000 (0:00:07.814) 0:05:04.201 ******** 2026-04-16 09:27:47.110479 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:47.110491 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:47.110501 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:47.110511 
| orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.110522 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.110532 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.110541 | orchestrator | 2026-04-16 09:27:47.110551 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-04-16 09:27:47.110561 | orchestrator | Thursday 16 April 2026 09:27:38 +0000 (0:00:01.687) 0:05:05.888 ******** 2026-04-16 09:27:47.110571 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:47.110580 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:47.110599 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:47.110610 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.110620 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.110631 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.110641 | orchestrator | 2026-04-16 09:27:47.110651 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-04-16 09:27:47.110662 | orchestrator | Thursday 16 April 2026 09:27:40 +0000 (0:00:01.937) 0:05:07.826 ******** 2026-04-16 09:27:47.110672 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.110682 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:47.110692 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:27:47.110703 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:27:47.110713 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.110723 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:27:47.110733 | orchestrator | 2026-04-16 09:27:47.110743 | orchestrator | TASK [nova-cell : Generating 'hostid' file for nova_compute] ******************* 2026-04-16 09:27:47.110753 | orchestrator | Thursday 16 April 2026 09:27:43 +0000 (0:00:02.895) 0:05:10.721 ******** 2026-04-16 09:27:47.110763 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:47.110773 | orchestrator | 
skipping: [testbed-node-1] 2026-04-16 09:27:47.110784 | orchestrator | ok: [testbed-node-3] 2026-04-16 09:27:47.110794 | orchestrator | ok: [testbed-node-4] 2026-04-16 09:27:47.110805 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:47.110813 | orchestrator | ok: [testbed-node-5] 2026-04-16 09:27:47.110819 | orchestrator | 2026-04-16 09:27:47.110825 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-04-16 09:27:47.110832 | orchestrator | Thursday 16 April 2026 09:27:46 +0000 (0:00:03.370) 0:05:14.091 ******** 2026-04-16 09:27:47.110842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:27:47.110864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:48.236821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:27:48.236949 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:48.236991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 
09:27:48.237014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:48.237034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:27:48.237054 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:48.237073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-04-16 09:27:48.237147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-04-16 09:27:48.237170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-04-16 09:27:48.237190 | orchestrator | skipping: [testbed-node-5] 
2026-04-16 09:27:48.237219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:27:48.237240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:27:48.237255 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:48.237267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:27:48.237278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:27:48.237299 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:48.237319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-04-16 09:27:53.617861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-04-16 09:27:53.617968 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:53.617984 | orchestrator | 2026-04-16 09:27:53.617996 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-04-16 09:27:53.618007 | orchestrator | Thursday 16 April 2026 09:27:49 +0000 (0:00:02.821) 0:05:16.913 ******** 2026-04-16 09:27:53.618068 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-04-16 09:27:53.618079 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618089 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:27:53.618114 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-04-16 09:27:53.618124 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618134 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:27:53.618144 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-04-16 09:27:53.618154 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618163 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:27:53.618173 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-04-16 09:27:53.618183 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618194 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:27:53.618204 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-04-16 09:27:53.618213 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618223 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:27:53.618232 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-04-16 09:27:53.618242 | orchestrator | skipping: 
[testbed-node-2] => (item=nova-compute-ironic)  2026-04-16 09:27:53.618252 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:27:53.618262 | orchestrator | 2026-04-16 09:27:53.618272 | orchestrator | TASK [service-check-containers : nova_cell | Check containers] ***************** 2026-04-16 09:27:53.618282 | orchestrator | Thursday 16 April 2026 09:27:51 +0000 (0:00:01.793) 0:05:18.706 ******** 2026-04-16 09:27:53.618294 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618326 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618356 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618372 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618385 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618441 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618464 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-04-16 09:27:53.618476 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:27:53.618494 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:27:58.584947 | orchestrator | ok: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585078 | orchestrator | ok: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585097 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585137 | orchestrator | ok: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585150 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585182 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585196 | orchestrator |
2026-04-16 09:27:58.585209 | orchestrator | TASK [service-check-containers : nova_cell | Notify handlers to restart containers] ***
2026-04-16 09:27:58.585222 | orchestrator | Thursday 16 April 2026 09:27:55 +0000 (0:00:04.823) 0:05:23.530 ********
2026-04-16 09:27:58.585235 | orchestrator | ok: [testbed-node-3] => {
2026-04-16 09:27:58.585248 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585260 | orchestrator | }
2026-04-16 09:27:58.585272 | orchestrator | ok: [testbed-node-4] => {
2026-04-16 09:27:58.585283 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585294 | orchestrator | }
2026-04-16 09:27:58.585306 | orchestrator | ok: [testbed-node-5] => {
2026-04-16 09:27:58.585317 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585328 | orchestrator | }
2026-04-16 09:27:58.585340 | orchestrator | ok: [testbed-node-0] => {
2026-04-16 09:27:58.585351 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585362 | orchestrator | }
2026-04-16 09:27:58.585374 | orchestrator | ok: [testbed-node-1] => {
2026-04-16 09:27:58.585385 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585432 | orchestrator | }
2026-04-16 09:27:58.585451 | orchestrator | ok: [testbed-node-2] => {
2026-04-16 09:27:58.585477 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 09:27:58.585495 | orchestrator | }
2026-04-16 09:27:58.585516 | orchestrator |
2026-04-16 09:27:58.585535 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 09:27:58.585569 | orchestrator | Thursday 16 April 2026 09:27:57 +0000 (0:00:01.652) 0:05:25.183 ********
2026-04-16 09:27:58.585591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-16 09:27:58.585608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:27:58.585622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:27:58.585636 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:27:58.585660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-16 09:28:02.387520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:28:02.387645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:28:02.387655 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:28:02.387663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-04-16 09:28:02.387670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-04-16 09:28:02.387676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328',
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-04-16 09:28:02.387683 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:28:02.387703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-16 09:28:02.387714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:28:02.387727 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:28:02.387733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-16 09:28:02.387739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:28:02.387745 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:28:02.387751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-04-16 09:28:02.387757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-04-16 09:28:02.387764 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:28:02.387770 | orchestrator |
2026-04-16 09:28:02.387777 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:28:02.387785 | orchestrator | Thursday 16 April 2026 09:28:01 +0000 (0:00:03.512) 0:05:28.695 ********
2026-04-16 09:28:02.387791 | orchestrator |
2026-04-16 09:28:02.387797 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:28:02.387802 | orchestrator | Thursday 16 April 2026 09:28:01 +0000 (0:00:00.684) 0:05:29.379 ********
2026-04-16 09:28:02.387808 | orchestrator |
2026-04-16 09:28:02.387813 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:28:02.387824 | orchestrator | Thursday 16 April 2026 09:28:02 +0000 (0:00:00.516) 0:05:29.895 ********
2026-04-16 09:28:02.387829 | orchestrator |
2026-04-16 09:28:02.387839 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:29:32.659706 | orchestrator | Thursday 16 April 2026 09:28:02 +0000 (0:00:00.518) 0:05:30.414 ********
2026-04-16 09:29:32.659824 | orchestrator |
2026-04-16 09:29:32.659840 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:29:32.659853 | orchestrator | Thursday 16 April 2026 09:28:03 +0000 (0:00:00.511) 0:05:30.925 ********
2026-04-16 09:29:32.659870 | orchestrator |
2026-04-16 09:29:32.659890 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-04-16 09:29:32.659946 | orchestrator | Thursday 16 April 2026 09:28:03 +0000 (0:00:00.521) 0:05:31.446 ********
2026-04-16 09:29:32.659966 | orchestrator |
2026-04-16 09:29:32.659984 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-04-16 09:29:32.660003 | orchestrator |
2026-04-16 09:29:32.660021 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-04-16 09:29:32.660060 | orchestrator | Thursday 16 April 2026 09:28:05 +0000 (0:00:02.039) 0:05:33.486 ********
2026-04-16 09:29:32.660081 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:29:32.660102 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:29:32.660123 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:29:32.660143 | orchestrator |
2026-04-16 09:29:32.660155 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-04-16 09:29:32.660166 | orchestrator |
2026-04-16 09:29:32.660177 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-04-16 09:29:32.660187 | orchestrator | Thursday 16 April 2026 09:28:07 +0000 (0:00:01.890) 0:05:35.376 ********
2026-04-16 09:29:32.660198 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:29:32.660209 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:29:32.660221 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:29:32.660234 | orchestrator |
2026-04-16 09:29:32.660247 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-04-16 09:29:32.660259 | orchestrator |
2026-04-16 09:29:32.660272 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-04-16 09:29:32.660284 | orchestrator | Thursday 16 April 2026 09:28:09 +0000 (0:00:02.187) 0:05:37.564 ********
2026-04-16 09:29:32.660297 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-04-16 09:29:32.660310 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-04-16 09:29:32.660323 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-04-16 09:29:32.660336 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-04-16 09:29:32.660348 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660361 | orchestrator | changed: [testbed-node-1] => (item=nova-conductor)
2026-04-16 09:29:32.660371 | orchestrator | changed: [testbed-node-2] => (item=nova-conductor)
2026-04-16 09:29:32.660409 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-04-16 09:29:32.660420 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-04-16 09:29:32.660431 | orchestrator | changed: [testbed-node-0] => (item=nova-conductor)
2026-04-16 09:29:32.660441 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-04-16 09:29:32.660452 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-04-16 09:29:32.660463 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660474 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-04-16 09:29:32.660484 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660495 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660506 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660517 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-04-16 09:29:32.660569 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660581 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-04-16 09:29:32.660592 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-04-16 09:29:32.660602 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-04-16 09:29:32.660613 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-04-16 09:29:32.660624 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-04-16 09:29:32.660634 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660645 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-04-16 09:29:32.660656 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660667 | orchestrator | changed: [testbed-node-2] => (item=nova-novncproxy)
2026-04-16 09:29:32.660678 | orchestrator | changed: [testbed-node-1] => (item=nova-novncproxy)
2026-04-16 09:29:32.660689 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-04-16 09:29:32.660699 | orchestrator | changed: [testbed-node-0] => (item=nova-novncproxy)
2026-04-16 09:29:32.660710 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-04-16 09:29:32.660721 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660732 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-04-16 09:29:32.660742 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660753 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-04-16 09:29:32.660764 | orchestrator |
2026-04-16 09:29:32.660775 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-04-16 09:29:32.660786 | orchestrator |
2026-04-16 09:29:32.660796 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-04-16 09:29:32.660807 | orchestrator | Thursday 16 April 2026 09:28:39 +0000 (0:00:29.822) 0:06:07.387 ********
2026-04-16 09:29:32.660818 | orchestrator | changed: [testbed-node-0] => (item=nova-scheduler)
2026-04-16 09:29:32.660847 | orchestrator | changed: [testbed-node-1] => (item=nova-scheduler)
2026-04-16 09:29:32.660859 | orchestrator | changed: [testbed-node-2] => (item=nova-scheduler)
2026-04-16 09:29:32.660870 | orchestrator | changed: [testbed-node-0] => (item=nova-api)
2026-04-16 09:29:32.660880 | orchestrator | changed: [testbed-node-1] => (item=nova-api)
2026-04-16 09:29:32.660891 | orchestrator | changed: [testbed-node-2] => (item=nova-api)
2026-04-16 09:29:32.660901 | orchestrator |
2026-04-16 09:29:32.660912 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-04-16 09:29:32.660923 | orchestrator |
2026-04-16 09:29:32.660934 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-04-16 09:29:32.660944 | orchestrator | Thursday 16 April 2026 09:28:59 +0000 (0:00:19.793) 0:06:27.181 ********
2026-04-16 09:29:32.660955 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:29:32.660966 | orchestrator |
2026-04-16 09:29:32.660976 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-04-16 09:29:32.660987 | orchestrator |
2026-04-16 09:29:32.661005 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-04-16 09:29:32.661016 | orchestrator | Thursday 16 April 2026 09:29:17 +0000 (0:00:17.925) 0:06:45.106 ********
2026-04-16 09:29:32.661026 | orchestrator | skipping:
[testbed-node-1]
2026-04-16 09:29:32.661038 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:29:32.661049 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:29:32.661059 | orchestrator |
2026-04-16 09:29:32.661120 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:29:32.661134 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 09:29:32.661148 | orchestrator | testbed-node-0 : ok=39  changed=8  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-04-16 09:29:32.661171 | orchestrator | testbed-node-1 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-16 09:29:32.661181 | orchestrator | testbed-node-2 : ok=27  changed=5  unreachable=0 failed=0 skipped=46  rescued=0 ignored=0
2026-04-16 09:29:32.661192 | orchestrator | testbed-node-3 : ok=43  changed=5  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-04-16 09:29:32.661203 | orchestrator | testbed-node-4 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-16 09:29:32.661213 | orchestrator | testbed-node-5 : ok=37  changed=4  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-04-16 09:29:32.661224 | orchestrator |
2026-04-16 09:29:32.661243 | orchestrator |
2026-04-16 09:29:32.661262 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:29:32.661280 | orchestrator | Thursday 16 April 2026 09:29:32 +0000 (0:00:14.886) 0:06:59.993 ********
2026-04-16 09:29:32.661298 | orchestrator | ===============================================================================
2026-04-16 09:29:32.661316 | orchestrator | nova-cell : Reload nova cell services to remove RPC version cap -------- 29.82s
2026-04-16 09:29:32.661333 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.02s
2026-04-16 09:29:32.661351 | orchestrator | nova : Reload nova API services to remove RPC version pin -------------- 19.79s
2026-04-16 09:29:32.661369 | orchestrator | nova : Run Nova upgrade checks ----------------------------------------- 19.74s
2026-04-16 09:29:32.661452 | orchestrator | nova : Run Nova API online database migrations ------------------------- 17.93s
2026-04-16 09:29:32.661471 | orchestrator | nova-cell : Run Nova cell online database migrations ------------------- 14.89s
2026-04-16 09:29:32.661487 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 13.39s
2026-04-16 09:29:32.661498 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.92s
2026-04-16 09:29:32.661509 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.91s
2026-04-16 09:29:32.661520 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.82s
2026-04-16 09:29:32.661530 | orchestrator | nova-cell : Copying over libvirt SASL configuration --------------------- 6.48s
2026-04-16 09:29:32.661541 | orchestrator | service-check-containers : nova | Check containers ---------------------- 5.14s
2026-04-16 09:29:32.661552 | orchestrator | nova-cell : Get container facts ----------------------------------------- 5.03s
2026-04-16 09:29:32.661562 | orchestrator | nova-cell : Copy over ceph.conf ----------------------------------------- 4.84s
2026-04-16 09:29:32.661573 | orchestrator | service-check-containers : nova_cell | Check containers ----------------- 4.82s
2026-04-16 09:29:32.661584 | orchestrator | nova-cell : Flush handlers ---------------------------------------------- 4.79s
2026-04-16 09:29:32.661594 | orchestrator | nova-cell : Copying over config.json files for services ----------------- 4.61s
2026-04-16 09:29:32.661605 | orchestrator | service-cert-copy : nova | Copying over extra CA certificates ----------- 4.52s
2026-04-16 09:29:32.661616 | orchestrator | nova-cell : Copying over libvirt configuration -------------------------- 4.50s
2026-04-16 09:29:32.661626 | orchestrator | nova-cell : Pushing secrets key for libvirt ----------------------------- 4.50s
2026-04-16 09:29:32.847306 | orchestrator | + osism apply -a upgrade horizon
2026-04-16 09:29:34.114324 | orchestrator | 2026-04-16 09:29:34 | INFO  | Prepare task for execution of horizon.
2026-04-16 09:29:34.175574 | orchestrator | 2026-04-16 09:29:34 | INFO  | Task a439cf09-53b9-417f-b526-a89f1b9ba19d (horizon) was prepared for execution.
2026-04-16 09:29:34.175661 | orchestrator | 2026-04-16 09:29:34 | INFO  | It takes a moment until task a439cf09-53b9-417f-b526-a89f1b9ba19d (horizon) has been started and output is visible here.
2026-04-16 09:29:42.448736 | orchestrator |
2026-04-16 09:29:42.448823 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:29:42.448832 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:29:42.448841 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:29:42.448853 | orchestrator |
2026-04-16 09:29:42.448870 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:29:42.448876 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:29:42.448882 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:29:42.448893 | orchestrator | Thursday 16 April 2026 09:29:38 +0000 (0:00:01.241) 0:00:01.241 ********
2026-04-16 09:29:42.448899 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:29:42.448906 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:29:42.448912 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:29:42.448917 | orchestrator |
2026-04-16 09:29:42.448923 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:29:42.448928 | orchestrator | Thursday 16 April 2026 09:29:39 +0000 (0:00:00.664) 0:00:01.906 ********
2026-04-16 09:29:42.448934 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-04-16 09:29:42.448939 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-04-16 09:29:42.448945 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-04-16 09:29:42.448950 | orchestrator |
2026-04-16 09:29:42.448956 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-04-16 09:29:42.448961 | orchestrator |
2026-04-16 09:29:42.448967 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-04-16 09:29:42.448972 | orchestrator | Thursday 16 April 2026 09:29:39 +0000 (0:00:00.675) 0:00:02.581 ********
2026-04-16 09:29:42.448978 | orchestrator | included: /ansible/roles/horizon/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:29:42.448984 | orchestrator |
2026-04-16 09:29:42.448990 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-04-16 09:29:42.448995 | orchestrator | Thursday 16 April 2026 09:29:40 +0000 (0:00:00.990) 0:00:03.571 ********
2026-04-16 09:29:42.449006 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:29:42.449050 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:29:42.449062 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 09:29:47.922551 | orchestrator | 2026-04-16 09:29:47.922649 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-04-16 09:29:47.922677 | orchestrator | Thursday 16 April 2026 09:29:42 +0000 (0:00:01.613) 0:00:05.185 ******** 2026-04-16 09:29:47.922686 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.922695 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.922702 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.922709 | orchestrator | 2026-04-16 09:29:47.922717 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 09:29:47.922724 | orchestrator | Thursday 16 April 2026 09:29:42 +0000 (0:00:00.276) 0:00:05.462 ******** 2026-04-16 09:29:47.922732 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-16 09:29:47.922740 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-16 09:29:47.922747 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-04-16 09:29:47.922755 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-04-16 09:29:47.922762 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-04-16 09:29:47.922770 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-04-16 09:29:47.922777 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-04-16 09:29:47.922784 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-04-16 09:29:47.922791 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-16 09:29:47.922798 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-16 09:29:47.922805 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-04-16 09:29:47.922812 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-04-16 09:29:47.922819 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-04-16 09:29:47.922826 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-04-16 09:29:47.922833 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-04-16 09:29:47.922841 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-04-16 09:29:47.922848 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-04-16 09:29:47.922855 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-04-16 09:29:47.922862 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-04-16 09:29:47.922885 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-04-16 09:29:47.922893 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-04-16 09:29:47.922900 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-04-16 09:29:47.922907 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-04-16 09:29:47.922915 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-04-16 09:29:47.922923 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-04-16 09:29:47.922932 | 
orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-04-16 09:29:47.922939 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-04-16 09:29:47.922946 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-04-16 09:29:47.922953 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-04-16 09:29:47.922960 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-04-16 09:29:47.922967 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-04-16 09:29:47.922975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-04-16 09:29:47.922982 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-04-16 09:29:47.923006 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-04-16 09:29:47.923014 | orchestrator | 2026-04-16 09:29:47.923022 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923034 | orchestrator | Thursday 16 April 2026 09:29:43 +0000 
(0:00:01.128) 0:00:06.590 ******** 2026-04-16 09:29:47.923043 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923051 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923060 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923068 | orchestrator | 2026-04-16 09:29:47.923076 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923084 | orchestrator | Thursday 16 April 2026 09:29:44 +0000 (0:00:00.285) 0:00:06.876 ******** 2026-04-16 09:29:47.923093 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923102 | orchestrator | 2026-04-16 09:29:47.923110 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:29:47.923118 | orchestrator | Thursday 16 April 2026 09:29:44 +0000 (0:00:00.132) 0:00:07.008 ******** 2026-04-16 09:29:47.923126 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923135 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:29:47.923143 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:29:47.923152 | orchestrator | 2026-04-16 09:29:47.923160 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923168 | orchestrator | Thursday 16 April 2026 09:29:44 +0000 (0:00:00.290) 0:00:07.299 ******** 2026-04-16 09:29:47.923176 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923185 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923199 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923208 | orchestrator | 2026-04-16 09:29:47.923216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923224 | orchestrator | Thursday 16 April 2026 09:29:45 +0000 (0:00:00.378) 0:00:07.677 ******** 2026-04-16 09:29:47.923233 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923240 | orchestrator | 2026-04-16 09:29:47.923247 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:29:47.923254 | orchestrator | Thursday 16 April 2026 09:29:45 +0000 (0:00:00.118) 0:00:07.796 ******** 2026-04-16 09:29:47.923261 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923269 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:29:47.923276 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:29:47.923283 | orchestrator | 2026-04-16 09:29:47.923290 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923297 | orchestrator | Thursday 16 April 2026 09:29:45 +0000 (0:00:00.260) 0:00:08.057 ******** 2026-04-16 09:29:47.923304 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923312 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923319 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923326 | orchestrator | 2026-04-16 09:29:47.923333 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923340 | orchestrator | Thursday 16 April 2026 09:29:45 +0000 (0:00:00.278) 0:00:08.335 ******** 2026-04-16 09:29:47.923347 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923355 | orchestrator | 2026-04-16 09:29:47.923362 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:29:47.923369 | orchestrator | Thursday 16 April 2026 09:29:45 +0000 (0:00:00.115) 0:00:08.451 ******** 2026-04-16 09:29:47.923397 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923404 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:29:47.923412 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:29:47.923419 | orchestrator | 2026-04-16 09:29:47.923426 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923433 | orchestrator | Thursday 16 April 2026 
09:29:46 +0000 (0:00:00.373) 0:00:08.825 ******** 2026-04-16 09:29:47.923440 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923447 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923455 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923462 | orchestrator | 2026-04-16 09:29:47.923469 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923476 | orchestrator | Thursday 16 April 2026 09:29:46 +0000 (0:00:00.289) 0:00:09.114 ******** 2026-04-16 09:29:47.923483 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923491 | orchestrator | 2026-04-16 09:29:47.923498 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:29:47.923505 | orchestrator | Thursday 16 April 2026 09:29:46 +0000 (0:00:00.120) 0:00:09.234 ******** 2026-04-16 09:29:47.923512 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923519 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:29:47.923526 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:29:47.923533 | orchestrator | 2026-04-16 09:29:47.923541 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923548 | orchestrator | Thursday 16 April 2026 09:29:46 +0000 (0:00:00.285) 0:00:09.520 ******** 2026-04-16 09:29:47.923555 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923562 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923569 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923576 | orchestrator | 2026-04-16 09:29:47.923584 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923591 | orchestrator | Thursday 16 April 2026 09:29:47 +0000 (0:00:00.399) 0:00:09.919 ******** 2026-04-16 09:29:47.923598 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923605 | orchestrator | 2026-04-16 
09:29:47.923613 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:29:47.923624 | orchestrator | Thursday 16 April 2026 09:29:47 +0000 (0:00:00.123) 0:00:10.042 ******** 2026-04-16 09:29:47.923632 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:29:47.923639 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:29:47.923646 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:29:47.923653 | orchestrator | 2026-04-16 09:29:47.923660 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:29:47.923668 | orchestrator | Thursday 16 April 2026 09:29:47 +0000 (0:00:00.265) 0:00:10.308 ******** 2026-04-16 09:29:47.923675 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:29:47.923682 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:29:47.923689 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:29:47.923696 | orchestrator | 2026-04-16 09:29:47.923704 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:29:47.923716 | orchestrator | Thursday 16 April 2026 09:29:47 +0000 (0:00:00.271) 0:00:10.580 ******** 2026-04-16 09:30:01.780486 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.780602 | orchestrator | 2026-04-16 09:30:01.780635 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:30:01.780649 | orchestrator | Thursday 16 April 2026 09:29:48 +0000 (0:00:00.146) 0:00:10.726 ******** 2026-04-16 09:30:01.780661 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.780673 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.780683 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.780695 | orchestrator | 2026-04-16 09:30:01.780706 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:30:01.780717 | orchestrator | 
Thursday 16 April 2026 09:29:48 +0000 (0:00:00.374) 0:00:11.101 ******** 2026-04-16 09:30:01.780728 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:30:01.780740 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:30:01.780751 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:30:01.780762 | orchestrator | 2026-04-16 09:30:01.780773 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:30:01.780784 | orchestrator | Thursday 16 April 2026 09:29:48 +0000 (0:00:00.282) 0:00:11.384 ******** 2026-04-16 09:30:01.780796 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.780808 | orchestrator | 2026-04-16 09:30:01.780819 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:30:01.780830 | orchestrator | Thursday 16 April 2026 09:29:48 +0000 (0:00:00.124) 0:00:11.508 ******** 2026-04-16 09:30:01.780841 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.780852 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.780863 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.780874 | orchestrator | 2026-04-16 09:30:01.780885 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:30:01.780896 | orchestrator | Thursday 16 April 2026 09:29:49 +0000 (0:00:00.279) 0:00:11.787 ******** 2026-04-16 09:30:01.780907 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:30:01.780918 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:30:01.780929 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:30:01.780940 | orchestrator | 2026-04-16 09:30:01.780951 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:30:01.780964 | orchestrator | Thursday 16 April 2026 09:29:49 +0000 (0:00:00.422) 0:00:12.210 ******** 2026-04-16 09:30:01.780977 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.780991 | 
orchestrator | 2026-04-16 09:30:01.781003 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:30:01.781016 | orchestrator | Thursday 16 April 2026 09:29:49 +0000 (0:00:00.126) 0:00:12.336 ******** 2026-04-16 09:30:01.781029 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781041 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.781054 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.781067 | orchestrator | 2026-04-16 09:30:01.781079 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 09:30:01.781117 | orchestrator | Thursday 16 April 2026 09:29:49 +0000 (0:00:00.286) 0:00:12.623 ******** 2026-04-16 09:30:01.781137 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:30:01.781155 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:30:01.781182 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:30:01.781201 | orchestrator | 2026-04-16 09:30:01.781219 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:30:01.781236 | orchestrator | Thursday 16 April 2026 09:29:50 +0000 (0:00:00.292) 0:00:12.916 ******** 2026-04-16 09:30:01.781254 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781270 | orchestrator | 2026-04-16 09:30:01.781290 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:30:01.781307 | orchestrator | Thursday 16 April 2026 09:29:50 +0000 (0:00:00.117) 0:00:13.033 ******** 2026-04-16 09:30:01.781326 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781343 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.781362 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.781463 | orchestrator | 2026-04-16 09:30:01.781477 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-04-16 
09:30:01.781488 | orchestrator | Thursday 16 April 2026 09:29:50 +0000 (0:00:00.395) 0:00:13.429 ******** 2026-04-16 09:30:01.781499 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:30:01.781510 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:30:01.781521 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:30:01.781532 | orchestrator | 2026-04-16 09:30:01.781543 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-04-16 09:30:01.781554 | orchestrator | Thursday 16 April 2026 09:29:51 +0000 (0:00:00.329) 0:00:13.758 ******** 2026-04-16 09:30:01.781565 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781576 | orchestrator | 2026-04-16 09:30:01.781586 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-04-16 09:30:01.781597 | orchestrator | Thursday 16 April 2026 09:29:51 +0000 (0:00:00.101) 0:00:13.860 ******** 2026-04-16 09:30:01.781608 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781619 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.781630 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.781641 | orchestrator | 2026-04-16 09:30:01.781651 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-04-16 09:30:01.781662 | orchestrator | Thursday 16 April 2026 09:29:51 +0000 (0:00:00.255) 0:00:14.116 ******** 2026-04-16 09:30:01.781673 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:30:01.781684 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:30:01.781695 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:30:01.781704 | orchestrator | 2026-04-16 09:30:01.781714 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-04-16 09:30:01.781723 | orchestrator | Thursday 16 April 2026 09:29:53 +0000 (0:00:01.597) 0:00:15.713 ******** 2026-04-16 09:30:01.781733 | orchestrator | ok: 
[testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-16 09:30:01.781743 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-16 09:30:01.781752 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-04-16 09:30:01.781762 | orchestrator | 2026-04-16 09:30:01.781772 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-04-16 09:30:01.781800 | orchestrator | Thursday 16 April 2026 09:29:54 +0000 (0:00:01.864) 0:00:17.577 ******** 2026-04-16 09:30:01.781820 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-16 09:30:01.781831 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-16 09:30:01.781841 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-04-16 09:30:01.781851 | orchestrator | 2026-04-16 09:30:01.781860 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-04-16 09:30:01.781881 | orchestrator | Thursday 16 April 2026 09:29:56 +0000 (0:00:01.923) 0:00:19.501 ******** 2026-04-16 09:30:01.781891 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 09:30:01.781901 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 09:30:01.781911 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-04-16 09:30:01.781920 | orchestrator | 2026-04-16 09:30:01.781930 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-04-16 09:30:01.781939 | orchestrator | Thursday 16 April 2026 09:29:58 +0000 (0:00:01.475) 
0:00:20.977 ******** 2026-04-16 09:30:01.781949 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.781959 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.781968 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.781978 | orchestrator | 2026-04-16 09:30:01.781988 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-04-16 09:30:01.781997 | orchestrator | Thursday 16 April 2026 09:29:58 +0000 (0:00:00.308) 0:00:21.285 ******** 2026-04-16 09:30:01.782007 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:01.782076 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:01.782087 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:30:01.782097 | orchestrator | 2026-04-16 09:30:01.782107 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 09:30:01.782116 | orchestrator | Thursday 16 April 2026 09:29:59 +0000 (0:00:00.574) 0:00:21.859 ******** 2026-04-16 09:30:01.782126 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:30:01.782136 | orchestrator | 2026-04-16 09:30:01.782146 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-04-16 09:30:01.782155 | orchestrator | Thursday 16 April 2026 09:30:00 +0000 (0:00:01.027) 0:00:22.887 ******** 2026-04-16 09:30:01.782170 | orchestrator | ok: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 09:30:01.782208 | orchestrator | ok: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}}) 2026-04-16 09:30:02.880906 | orchestrator | ok: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:30:02.881036 | orchestrator |
2026-04-16 09:30:02.881054 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-04-16 09:30:02.881067 | orchestrator | Thursday 16 April 2026 09:30:02 +0000 (0:00:02.057) 0:00:24.944 ********
2026-04-16 09:30:02.881101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port':
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:30:02.881116 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:02.881137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:30:02.881157 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:02.881178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:30:05.819890 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:30:05.819995 | orchestrator |
2026-04-16 09:30:05.820012 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-04-16 09:30:05.820024 | orchestrator | Thursday 16 April 2026 09:30:02 +0000 (0:00:00.689) 0:00:25.634 ********
2026-04-16 09:30:05.820057 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:30:05.820098 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:05.820131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:30:05.820154 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:30:05.820175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:30:05.820187 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:30:05.820198 | orchestrator |
2026-04-16 09:30:05.820210 | orchestrator | TASK [service-check-containers : horizon | Check containers] *******************
2026-04-16 09:30:05.820221 | orchestrator | Thursday 16 April 2026 09:30:04 +0000 (0:00:01.261) 0:00:26.895 ********
2026-04-16 09:30:05.820248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes':
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 09:30:06.809108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-04-16 09:30:06.809306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-04-16 09:30:06.809465 | orchestrator |
2026-04-16 09:30:06.809488 | orchestrator | TASK [service-check-containers : horizon | Notify handlers to restart containers] ***
2026-04-16 09:30:06.809501 | orchestrator | Thursday 16 April 2026 09:30:05 +0000 (0:00:01.743) 0:00:28.639 ********
2026-04-16 09:30:06.809514 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 09:30:06.809528 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 09:30:06.809539 | orchestrator | }
2026-04-16 09:30:06.809550 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 09:30:06.809561 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 09:30:06.809573 | orchestrator | }
2026-04-16 09:30:06.809583 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 09:30:06.809597 | orchestrator |     "msg": "Notifying handlers"
2026-04-16 09:30:06.809609 | orchestrator | }
2026-04-16 09:30:06.809622 | orchestrator |
2026-04-16 09:30:06.809636 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 09:30:06.809649 | orchestrator | Thursday 16 April 2026 09:30:06 +0000 (0:00:00.351) 0:00:28.991 ********
2026-04-16 09:30:06.809664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes':
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:30:06.809690 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:30:06.809727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:31:13.705621 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
09:31:13.705715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-04-16 09:31:13.705751 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:31:13.705760 | orchestrator | 2026-04-16 09:31:13.705768 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 09:31:13.705777 | orchestrator | Thursday 16 April 2026 09:30:07 +0000 (0:00:01.216) 0:00:30.208 ******** 2026-04-16 09:31:13.705784 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:31:13.705791 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:31:13.705799 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:31:13.705806 | orchestrator | 2026-04-16 09:31:13.705814 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-04-16 09:31:13.705822 | orchestrator | Thursday 16 April 2026 09:30:08 +0000 (0:00:00.493) 0:00:30.701 ******** 2026-04-16 09:31:13.705830 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:31:13.705837 | orchestrator | 2026-04-16 09:31:13.705845 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-04-16 09:31:13.705852 | orchestrator | Thursday 16 April 2026 09:30:08 +0000 (0:00:00.909) 0:00:31.611 ******** 2026-04-16 09:31:13.705860 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:31:13.705867 | orchestrator | 2026-04-16 09:31:13.705874 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 09:31:13.705882 | orchestrator | Thursday 16 April 2026 09:30:44 +0000 (0:00:35.118) 0:01:06.730 ******** 2026-04-16 09:31:13.705889 | orchestrator | 2026-04-16 09:31:13.705929 
| orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 09:31:13.705934 | orchestrator | Thursday 16 April 2026 09:30:44 +0000 (0:00:00.072) 0:01:06.802 ******** 2026-04-16 09:31:13.705939 | orchestrator | 2026-04-16 09:31:13.705943 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-04-16 09:31:13.705948 | orchestrator | Thursday 16 April 2026 09:30:44 +0000 (0:00:00.274) 0:01:07.077 ******** 2026-04-16 09:31:13.705952 | orchestrator | 2026-04-16 09:31:13.705956 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-04-16 09:31:13.705961 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-16 09:31:13.705965 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-16 09:31:13.705974 | orchestrator | Thursday 16 April 2026 09:30:44 +0000 (0:00:00.075) 0:01:07.152 ******** 2026-04-16 09:31:13.705984 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:31:13.705988 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:31:13.705993 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:31:13.705997 | orchestrator | 2026-04-16 09:31:13.706064 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:31:13.706070 | orchestrator | testbed-node-0 : ok=36  changed=6  unreachable=0 failed=0 skipped=26  rescued=0 ignored=0 2026-04-16 09:31:13.706076 | orchestrator | testbed-node-1 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-16 09:31:13.706080 | orchestrator | testbed-node-2 : ok=35  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2026-04-16 09:31:13.706085 | orchestrator | 2026-04-16 09:31:13.706089 | orchestrator | 2026-04-16 09:31:13.706093 | orchestrator | TASKS RECAP ******************************************************************** 
2026-04-16 09:31:13.706104 | orchestrator | Thursday 16 April 2026 09:31:13 +0000 (0:00:28.865) 0:01:36.018 ********
2026-04-16 09:31:13.706112 | orchestrator | ===============================================================================
2026-04-16 09:31:13.706123 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 35.12s
2026-04-16 09:31:13.706131 | orchestrator | horizon : Restart horizon container ------------------------------------ 28.87s
2026-04-16 09:31:13.706139 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 2.06s
2026-04-16 09:31:13.706146 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.92s
2026-04-16 09:31:13.706154 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.86s
2026-04-16 09:31:13.706161 | orchestrator | service-check-containers : horizon | Check containers ------------------- 1.74s
2026-04-16 09:31:13.706168 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.61s
2026-04-16 09:31:13.706175 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.60s
2026-04-16 09:31:13.706182 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.48s
2026-04-16 09:31:13.706189 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.26s
2026-04-16 09:31:13.706197 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.22s
2026-04-16 09:31:13.706204 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.13s
2026-04-16 09:31:13.706212 | orchestrator | horizon : include_tasks ------------------------------------------------- 1.03s
2026-04-16 09:31:13.706219 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.99s
2026-04-16 09:31:13.706228 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.91s
2026-04-16 09:31:13.706236 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s
2026-04-16 09:31:13.706244 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-04-16 09:31:13.706259 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.66s
2026-04-16 09:31:13.706267 | orchestrator | horizon : Copying over custom themes ------------------------------------ 0.57s
2026-04-16 09:31:13.706274 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s
2026-04-16 09:31:13.891331 | orchestrator | + osism apply -a upgrade skyline
2026-04-16 09:31:15.172015 | orchestrator | 2026-04-16 09:31:15 | INFO  | Prepare task for execution of skyline.
2026-04-16 09:31:15.243433 | orchestrator | 2026-04-16 09:31:15 | INFO  | Task 5140f40e-0857-4f63-95c5-3a2def75f51b (skyline) was prepared for execution.
2026-04-16 09:31:15.243532 | orchestrator | 2026-04-16 09:31:15 | INFO  | It takes a moment until task 5140f40e-0857-4f63-95c5-3a2def75f51b (skyline) has been started and output is visible here.
2026-04-16 09:31:32.464019 | orchestrator |
2026-04-16 09:31:32.464138 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:31:32.464150 | orchestrator |
2026-04-16 09:31:32.464158 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:31:32.464166 | orchestrator | Thursday 16 April 2026 09:31:20 +0000 (0:00:02.116) 0:00:02.116 ********
2026-04-16 09:31:32.464174 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:31:32.464182 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:31:32.464190 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:31:32.464197 | orchestrator |
2026-04-16 09:31:32.464204 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:31:32.464211 | orchestrator | Thursday 16 April 2026 09:31:23 +0000 (0:00:02.464) 0:00:04.581 ********
2026-04-16 09:31:32.464219 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True)
2026-04-16 09:31:32.464226 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True)
2026-04-16 09:31:32.464234 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True)
2026-04-16 09:31:32.464241 | orchestrator |
2026-04-16 09:31:32.464248 | orchestrator | PLAY [Apply role skyline] ******************************************************
2026-04-16 09:31:32.464255 | orchestrator |
2026-04-16 09:31:32.464263 | orchestrator | TASK [skyline : include_tasks] *************************************************
2026-04-16 09:31:32.464270 | orchestrator | Thursday 16 April 2026 09:31:24 +0000 (0:00:01.569) 0:00:06.151 ********
2026-04-16 09:31:32.464277 | orchestrator | included: /ansible/roles/skyline/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:31:32.464285 | orchestrator |
2026-04-16 09:31:32.464292 | orchestrator | TASK [skyline : Ensuring config directories exist] *****************************
2026-04-16 09:31:32.464300 | orchestrator | Thursday 16 April 2026 09:31:26 +0000 (0:00:02.059) 0:00:08.210 ******** 2026-04-16 09:31:32.464312 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:32.464323 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:32.464379 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:32.464400 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:32.464409 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:32.464417 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:32.464431 | orchestrator | 2026-04-16 09:31:32.464439 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-04-16 09:31:32.464446 | orchestrator | Thursday 16 April 2026 09:31:29 +0000 (0:00:02.924) 0:00:11.134 ******** 2026-04-16 09:31:32.464454 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:31:32.464461 | orchestrator | 2026-04-16 09:31:32.464472 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-04-16 09:31:32.464480 | orchestrator | Thursday 16 April 2026 09:31:31 +0000 (0:00:01.645) 0:00:12.779 ******** 2026-04-16 09:31:32.464495 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:34.890094 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:34.890207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:34.890242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:34.890299 | orchestrator | ok: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:34.890314 | orchestrator | ok: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:34.890327 | orchestrator | 2026-04-16 09:31:34.890341 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-04-16 09:31:34.890404 | orchestrator | 
Thursday 16 April 2026 09:31:34 +0000 (0:00:03.247) 0:00:16.027 ******** 2026-04-16 09:31:34.890419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:34.890445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 
'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:34.890458 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:31:34.890479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:36.628734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:36.628842 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:31:36.628861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:36.628915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:36.628929 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:31:36.628941 | orchestrator | 2026-04-16 09:31:36.628952 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-04-16 09:31:36.628965 | orchestrator | Thursday 16 April 2026 09:31:36 +0000 (0:00:01.592) 0:00:17.619 ******** 2026-04-16 09:31:36.628997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:36.629027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:36.629048 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:31:36.629060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:36.629078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:36.629090 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:31:36.629109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:46.619234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:46.619445 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:31:46.619467 | orchestrator | 2026-04-16 
09:31:46.619481 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ****************** 2026-04-16 09:31:46.619494 | orchestrator | Thursday 16 April 2026 09:31:37 +0000 (0:00:01.748) 0:00:19.368 ******** 2026-04-16 09:31:46.619522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:46.619536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:46.619570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:46.619585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:46.619628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:46.619654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:46.619667 | orchestrator | 2026-04-16 09:31:46.619678 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-04-16 09:31:46.619690 | orchestrator | Thursday 16 April 2026 09:31:41 +0000 (0:00:03.663) 0:00:23.031 ******** 2026-04-16 09:31:46.619701 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-16 09:31:46.619713 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-16 09:31:46.619723 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-04-16 09:31:46.619734 | orchestrator | 2026-04-16 09:31:46.619746 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-04-16 09:31:46.619759 | orchestrator | Thursday 16 April 2026 09:31:44 +0000 (0:00:03.470) 0:00:26.501 ******** 2026-04-16 09:31:46.619784 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-16 09:31:54.690463 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-16 09:31:54.690565 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-04-16 09:31:54.690574 | orchestrator | 2026-04-16 09:31:54.690582 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-04-16 09:31:54.690589 | orchestrator | Thursday 16 April 2026 09:31:47 +0000 (0:00:02.916) 0:00:29.418 ******** 2026-04-16 09:31:54.690600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:54.690622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:54.690630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:54.690652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:54.690664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:54.690674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:54.690681 | orchestrator | 2026-04-16 09:31:54.690687 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-04-16 09:31:54.690693 | orchestrator | Thursday 16 April 2026 09:31:51 +0000 (0:00:03.767) 0:00:33.186 ******** 2026-04-16 09:31:54.690699 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:31:54.690706 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:31:54.690712 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:31:54.690722 | orchestrator | 2026-04-16 09:31:54.690732 | orchestrator | TASK [service-check-containers : skyline | Check containers] ******************* 2026-04-16 09:31:54.690741 | orchestrator | Thursday 16 April 2026 09:31:53 +0000 (0:00:01.687) 0:00:34.873 ******** 2026-04-16 09:31:54.690752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:54.690777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:58.473037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 
'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-04-16 09:31:58.473134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:58.473146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:58.473187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-04-16 09:31:58.473197 | orchestrator | 2026-04-16 09:31:58.473206 | orchestrator | TASK [service-check-containers : skyline | Notify handlers to restart containers] *** 2026-04-16 09:31:58.473215 | orchestrator | Thursday 16 April 2026 09:31:56 +0000 (0:00:03.266) 0:00:38.140 ******** 2026-04-16 09:31:58.473223 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:31:58.473231 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:31:58.473238 | orchestrator | } 2026-04-16 09:31:58.473246 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:31:58.473253 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:31:58.473260 | orchestrator | } 2026-04-16 09:31:58.473267 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:31:58.473274 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:31:58.473281 | orchestrator | } 2026-04-16 09:31:58.473288 | orchestrator | 2026-04-16 09:31:58.473297 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:31:58.473304 | orchestrator | Thursday 16 April 2026 09:31:57 +0000 (0:00:01.365) 0:00:39.505 ******** 2026-04-16 09:31:58.473317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:58.473326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:31:58.473340 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:31:58.473404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:31:58.473430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-04-16 09:32:34.345642 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:32:34.345777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-04-16 09:32:34.345818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  
2026-04-16 09:32:34.345832 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:32:34.345843 | orchestrator |
2026-04-16 09:32:34.345854 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 09:32:34.345865 | orchestrator | Thursday 16 April 2026 09:32:00 +0000 (0:00:02.035) 0:00:41.540 ********
2026-04-16 09:32:34.345875 | orchestrator |
2026-04-16 09:32:34.345885 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 09:32:34.345895 | orchestrator | Thursday 16 April 2026 09:32:00 +0000 (0:00:00.474) 0:00:42.015 ********
2026-04-16 09:32:34.345905 | orchestrator |
2026-04-16 09:32:34.345914 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-04-16 09:32:34.345924 | orchestrator | Thursday 16 April 2026 09:32:00 +0000 (0:00:00.435) 0:00:42.450 ********
2026-04-16 09:32:34.345934 | orchestrator |
2026-04-16 09:32:34.345943 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-04-16 09:32:34.345953 | orchestrator | Thursday 16 April 2026 09:32:01 +0000 (0:00:00.845) 0:00:43.296 ********
2026-04-16 09:32:34.345963 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:32:34.345972 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:32:34.345982 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:32:34.345992 | orchestrator |
2026-04-16 09:32:34.346002 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-04-16 09:32:34.346068 | orchestrator | Thursday 16 April 2026 09:32:16 +0000 (0:00:14.641) 0:00:57.938 ********
2026-04-16 09:32:34.346080 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:32:34.346090 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:32:34.346134 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:32:34.346146 | orchestrator |
2026-04-16 09:32:34.346157 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:32:34.346168 | orchestrator | testbed-node-0 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 09:32:34.346179 | orchestrator | testbed-node-1 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 09:32:34.346191 | orchestrator | testbed-node-2 : ok=14  changed=7  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 09:32:34.346202 | orchestrator |
2026-04-16 09:32:34.346213 | orchestrator |
2026-04-16 09:32:34.346224 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:32:34.346236 | orchestrator | Thursday 16 April 2026 09:32:34 +0000 (0:00:17.603) 0:01:15.541 ********
2026-04-16 09:32:34.346254 | orchestrator | ===============================================================================
2026-04-16 09:32:34.346292 | orchestrator | skyline : Restart skyline-console container ---------------------------- 17.60s
2026-04-16 09:32:34.346320 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 14.64s
2026-04-16 09:32:34.346337 | orchestrator | skyline : Copying over config.json files for services ------------------- 3.77s
2026-04-16 09:32:34.346389 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 3.66s
2026-04-16 09:32:34.346405 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 3.47s
2026-04-16 09:32:34.346422 | orchestrator | service-check-containers : skyline | Check containers ------------------- 3.27s
2026-04-16 09:32:34.346440 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 3.25s
2026-04-16 09:32:34.346454 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 2.92s
2026-04-16 09:32:34.346470 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.92s
2026-04-16 09:32:34.346485 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.46s
2026-04-16 09:32:34.346502 | orchestrator | skyline : include_tasks ------------------------------------------------- 2.06s
2026-04-16 09:32:34.346519 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.04s
2026-04-16 09:32:34.346535 | orchestrator | skyline : Flush handlers ------------------------------------------------ 1.76s
2026-04-16 09:32:34.346553 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.75s
2026-04-16 09:32:34.346570 | orchestrator | skyline : Copying over custom logos ------------------------------------- 1.69s
2026-04-16 09:32:34.346585 | orchestrator | skyline : include_tasks ------------------------------------------------- 1.65s
2026-04-16 09:32:34.346602 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS certificate --- 1.59s
2026-04-16 09:32:34.346634 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.57s
2026-04-16 09:32:34.346651 | orchestrator | service-check-containers : skyline | Notify handlers to restart containers --- 1.37s
2026-04-16 09:32:34.524446 | orchestrator | + osism apply -a upgrade glance
2026-04-16 09:32:35.765098 | orchestrator | 2026-04-16 09:32:35 | INFO  | Prepare task for execution of glance.
2026-04-16 09:32:35.827759 | orchestrator | 2026-04-16 09:32:35 | INFO  | Task 226e3844-8826-4045-bb92-0e70957edc72 (glance) was prepared for execution.
2026-04-16 09:32:35.827870 | orchestrator | 2026-04-16 09:32:35 | INFO  | It takes a moment until task 226e3844-8826-4045-bb92-0e70957edc72 (glance) has been started and output is visible here.
2026-04-16 09:33:19.808532 | orchestrator |
2026-04-16 09:33:19.808645 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:33:19.808662 | orchestrator |
2026-04-16 09:33:19.808673 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:33:19.808683 | orchestrator | Thursday 16 April 2026 09:32:40 +0000 (0:00:01.719) 0:00:01.719 ********
2026-04-16 09:33:19.808693 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:33:19.808709 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:33:19.808727 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:33:19.808744 | orchestrator |
2026-04-16 09:33:19.808769 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:33:19.808786 | orchestrator | Thursday 16 April 2026 09:32:42 +0000 (0:00:01.721) 0:00:03.441 ********
2026-04-16 09:33:19.808804 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-04-16 09:33:19.808823 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-04-16 09:33:19.808840 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-04-16 09:33:19.808857 | orchestrator |
2026-04-16 09:33:19.808874 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-04-16 09:33:19.808893 | orchestrator |
2026-04-16 09:33:19.808911 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:33:19.808929 | orchestrator | Thursday 16 April 2026 09:32:45 +0000 (0:00:03.247) 0:00:06.689 ********
2026-04-16 09:33:19.808972 | orchestrator | included: /ansible/roles/glance/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:33:19.808984 | orchestrator |
2026-04-16 09:33:19.808994 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:33:19.809003 | orchestrator | Thursday 16 April 2026 09:32:47 +0000 (0:00:02.304) 0:00:08.993 ********
2026-04-16 09:33:19.809013 | orchestrator | included: /ansible/roles/glance/tasks/rolling_upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:33:19.809023 | orchestrator |
2026-04-16 09:33:19.809032 | orchestrator | TASK [glance : Start Glance upgrade] *******************************************
2026-04-16 09:33:19.809042 | orchestrator | Thursday 16 April 2026 09:32:49 +0000 (0:00:01.550) 0:00:10.543 ********
2026-04-16 09:33:19.809053 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:33:19.809064 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:33:19.809074 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:33:19.809086 | orchestrator |
2026-04-16 09:33:19.809096 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:33:19.809107 | orchestrator | Thursday 16 April 2026 09:32:50 +0000 (0:00:01.175) 0:00:11.719 ********
2026-04-16 09:33:19.809120 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:33:19.809138 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:33:19.809154 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-0
2026-04-16 09:33:19.809171 | orchestrator |
2026-04-16 09:33:19.809187 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-04-16 09:33:19.809202 | orchestrator | Thursday 16 April 2026 09:32:52 +0000 (0:00:01.638) 0:00:13.358 ********
2026-04-16 09:33:19.809242 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes':
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:33:19.809267 | orchestrator | 2026-04-16 09:33:19.809284 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 09:33:19.809301 | orchestrator | Thursday 16 April 2026 09:32:56 +0000 (0:00:04.419) 0:00:17.778 ******** 2026-04-16 09:33:19.809318 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0 2026-04-16 09:33:19.809389 | orchestrator | 2026-04-16 09:33:19.809425 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-04-16 09:33:19.809438 | orchestrator | Thursday 16 April 2026 
09:32:58 +0000 (0:00:01.508) 0:00:19.287 ********
2026-04-16 09:33:19.809459 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:33:19.809468 | orchestrator |
2026-04-16 09:33:19.809478 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-16 09:33:19.809488 | orchestrator | Thursday 16 April 2026 09:33:02 +0000 (0:00:04.451) 0:00:23.739 ********
2026-04-16 09:33:19.809498 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:33:19.809510 | orchestrator |
2026-04-16 09:33:19.809519 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-16 09:33:19.809529 | orchestrator | Thursday 16 April 2026 09:33:05 +0000 (0:00:02.529) 0:00:26.269 ********
2026-04-16 09:33:19.809539 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:33:19.809549 | orchestrator |
2026-04-16 09:33:19.809558 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-16 09:33:19.809568 | orchestrator | Thursday 16 April 2026 09:33:07 +0000 (0:00:01.971) 0:00:28.240 ********
2026-04-16 09:33:19.809577 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:33:19.809587 | orchestrator |
2026-04-16 09:33:19.809597 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-16 09:33:19.809606 | orchestrator | Thursday 16 April 2026 09:33:08 +0000 (0:00:01.483) 0:00:29.723 ********
2026-04-16 09:33:19.809616 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:33:19.809625 | orchestrator |
2026-04-16 09:33:19.809635 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-16 09:33:19.809644 | orchestrator | Thursday 16 April 2026 09:33:09
+0000 (0:00:01.098) 0:00:30.822 ******** 2026-04-16 09:33:19.809654 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:33:19.809664 | orchestrator | 2026-04-16 09:33:19.809673 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 09:33:19.809683 | orchestrator | Thursday 16 April 2026 09:33:10 +0000 (0:00:01.113) 0:00:31.936 ******** 2026-04-16 09:33:19.809692 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0 2026-04-16 09:33:19.809702 | orchestrator | 2026-04-16 09:33:19.809711 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-16 09:33:19.809724 | orchestrator | Thursday 16 April 2026 09:33:12 +0000 (0:00:01.527) 0:00:33.464 ******** 2026-04-16 09:33:19.809750 | orchestrator | ok: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:33:19.809777 | orchestrator | 2026-04-16 09:33:19.809793 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-16 09:33:19.809809 | orchestrator | Thursday 16 April 2026 09:33:17 +0000 (0:00:04.590) 0:00:38.054 ******** 2026-04-16 09:33:19.809840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:07.815416 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:35:07.815542 | orchestrator | 2026-04-16 09:35:07.815561 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-16 09:35:07.815575 | orchestrator | Thursday 16 April 2026 09:33:20 +0000 (0:00:03.870) 0:00:41.924 ******** 2026-04-16 09:35:07.815606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:07.815646 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:35:07.815659 | orchestrator | 2026-04-16 09:35:07.815671 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-16 09:35:07.815682 | orchestrator | Thursday 16 April 2026 09:33:24 +0000 (0:00:03.924) 0:00:45.849 ******** 2026-04-16 09:35:07.815693 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:35:07.815704 | orchestrator | 2026-04-16 09:35:07.815715 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-16 09:35:07.815726 | orchestrator | Thursday 16 April 2026 09:33:28 +0000 (0:00:04.140) 0:00:49.989 ******** 2026-04-16 09:35:07.815759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:35:07.815773 | orchestrator | 2026-04-16 09:35:07.815785 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-16 09:35:07.815795 | orchestrator | Thursday 16 April 2026 09:33:34 +0000 (0:00:05.068) 0:00:55.057 ******** 2026-04-16 09:35:07.815806 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:35:07.815817 | orchestrator | 2026-04-16 09:35:07.815828 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-16 09:35:07.815839 | orchestrator | Thursday 16 April 2026 09:33:40 +0000 (0:00:06.347) 0:01:01.405 ******** 
2026-04-16 09:35:07.815850 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.815861 | orchestrator |
2026-04-16 09:35:07.815873 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-04-16 09:35:07.815889 | orchestrator | Thursday 16 April 2026 09:33:44 +0000 (0:00:03.984) 0:01:05.389 ********
2026-04-16 09:35:07.815908 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.815923 | orchestrator |
2026-04-16 09:35:07.815938 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-04-16 09:35:07.815964 | orchestrator | Thursday 16 April 2026 09:33:48 +0000 (0:00:04.144) 0:01:09.534 ********
2026-04-16 09:35:07.815985 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.816003 | orchestrator |
2026-04-16 09:35:07.816021 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-04-16 09:35:07.816039 | orchestrator | Thursday 16 April 2026 09:33:52 +0000 (0:00:03.997) 0:01:13.531 ********
2026-04-16 09:35:07.816072 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.816092 | orchestrator |
2026-04-16 09:35:07.816120 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-04-16 09:35:07.816140 | orchestrator | Thursday 16 April 2026 09:33:53 +0000 (0:00:01.096) 0:01:14.628 ********
2026-04-16 09:35:07.816159 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-04-16 09:35:07.816174 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.816187 | orchestrator |
2026-04-16 09:35:07.816199 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-04-16 09:35:07.816212 | orchestrator | Thursday 16 April 2026 09:33:57 +0000 (0:00:03.723) 0:01:18.352 ********
2026-04-16 09:35:07.816226 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.816239 | orchestrator |
2026-04-16 09:35:07.816250 | orchestrator | TASK [glance : Generating 'hostid' file for glance_api] ************************
2026-04-16 09:35:07.816260 | orchestrator | Thursday 16 April 2026 09:34:01 +0000 (0:00:03.818) 0:01:22.170 ********
2026-04-16 09:35:07.816271 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:35:07.816282 | orchestrator |
2026-04-16 09:35:07.816292 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:35:07.816303 | orchestrator | Thursday 16 April 2026 09:34:04 +0000 (0:00:03.770) 0:01:25.941 ********
2026-04-16 09:35:07.816341 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:35:07.816355 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:35:07.816366 | orchestrator | included: /ansible/roles/glance/tasks/stop_service.yml for testbed-node-0
2026-04-16 09:35:07.816377 | orchestrator |
2026-04-16 09:35:07.816388 | orchestrator | TASK [glance : Stop glance service] ********************************************
2026-04-16 09:35:07.816400 | orchestrator | Thursday 16 April 2026 09:34:06 +0000 (0:00:01.737) 0:01:27.679 ********
2026-04-16 09:35:07.816411 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:35:07.816422 | orchestrator |
2026-04-16 09:35:07.816433 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-04-16 09:35:07.816444 | orchestrator | Thursday 16 April 2026 09:34:18 +0000 (0:00:11.710) 0:01:39.390 ********
2026-04-16 09:35:07.816455 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:35:07.816465 | orchestrator |
2026-04-16 09:35:07.816476 | orchestrator | TASK [glance : Running Glance database expand container] ***********************
2026-04-16 09:35:07.816487 | orchestrator | Thursday 16 April 2026 09:34:21 +0000 (0:00:03.266) 0:01:42.656 ********
2026-04-16 09:35:07.816498 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:35:07.816509 |
orchestrator | 2026-04-16 09:35:07.816520 | orchestrator | TASK [glance : Running Glance database migrate container] ********************** 2026-04-16 09:35:07.816530 | orchestrator | Thursday 16 April 2026 09:34:48 +0000 (0:00:26.396) 0:02:09.053 ******** 2026-04-16 09:35:07.816541 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:35:07.816552 | orchestrator | 2026-04-16 09:35:07.816563 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 09:35:07.816574 | orchestrator | Thursday 16 April 2026 09:35:02 +0000 (0:00:14.936) 0:02:23.989 ******** 2026-04-16 09:35:07.816585 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:35:07.816596 | orchestrator | included: /ansible/roles/glance/tasks/config.yml for testbed-node-1, testbed-node-2 2026-04-16 09:35:07.816606 | orchestrator | 2026-04-16 09:35:07.816617 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-04-16 09:35:07.816628 | orchestrator | Thursday 16 April 2026 09:35:04 +0000 (0:00:01.313) 0:02:25.303 ******** 2026-04-16 09:35:07.816655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:35:32.306943 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-04-16 09:35:32.307066 | orchestrator |
2026-04-16 09:35:32.307084 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:35:32.307098 | orchestrator | Thursday 16 April 2026 09:35:09 +0000 (0:00:04.977) 0:02:30.280 ********
2026-04-16 09:35:32.307110 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-1, testbed-node-2
2026-04-16 09:35:32.307123 | orchestrator |
2026-04-16 09:35:32.307134 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-04-16 09:35:32.307146 | orchestrator | Thursday 16 April 2026 09:35:10 +0000 (0:00:01.187) 0:02:31.468 ********
2026-04-16 09:35:32.307158 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:35:32.307171 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:35:32.307182 | orchestrator |
2026-04-16 09:35:32.307194 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-04-16 09:35:32.307205 | orchestrator | Thursday 16 April 2026 09:35:14 +0000 (0:00:04.555) 0:02:36.023 ********
2026-04-16 09:35:32.307242 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:35:32.307257 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:35:32.307269 | orchestrator |
2026-04-16 09:35:32.307280 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-04-16 09:35:32.307292 | orchestrator | Thursday 16 April 2026 09:35:17 +0000 (0:00:02.252) 0:02:38.276 ********
2026-04-16 09:35:32.307303 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:35:32.307362 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'pool': 'images', 'user': 'glance', 'enabled': True})
2026-04-16 09:35:32.307373 | orchestrator |
2026-04-16 09:35:32.307384 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-04-16 09:35:32.307395 | orchestrator | Thursday 16 April 2026 09:35:19 +0000 (0:00:02.033) 0:02:40.309 ********
2026-04-16 09:35:32.307405 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:35:32.307416 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:35:32.307429 | orchestrator |
2026-04-16 09:35:32.307441 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-04-16 09:35:32.307453 | orchestrator | Thursday 16 April 2026 09:35:20 +0000 (0:00:01.670) 0:02:41.980 ********
2026-04-16 09:35:32.307466 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:35:32.307479 | orchestrator |
2026-04-16 09:35:32.307491 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-04-16 09:35:32.307503 | orchestrator | Thursday 16 April 2026 09:35:22 +0000 (0:00:01.139) 0:02:43.120 ********
2026-04-16 09:35:32.307515 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:35:32.307527 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:35:32.307540 | orchestrator | 2026-04-16 09:35:32.307551 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-04-16 09:35:32.307564 | orchestrator | Thursday 16 April 2026 09:35:23 +0000 (0:00:01.298) 0:02:44.419 ******** 2026-04-16 09:35:32.307577 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-1, testbed-node-2 2026-04-16 09:35:32.307589 | orchestrator | 2026-04-16 09:35:32.307620 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-04-16 09:35:32.307640 | orchestrator | Thursday 16 April 2026 09:35:24 +0000 (0:00:01.207) 0:02:45.626 ******** 2026-04-16 09:35:32.307654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:35:32.307681 | orchestrator | ok: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:35:32.307694 | orchestrator | 2026-04-16 09:35:32.307705 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-04-16 09:35:32.307716 | orchestrator | Thursday 16 April 2026 09:35:29 +0000 (0:00:04.920) 0:02:50.546 ******** 2026-04-16 09:35:32.307744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:45.589118 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:35:45.589240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:45.589265 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:35:45.589277 | orchestrator | 2026-04-16 09:35:45.589289 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-04-16 09:35:45.589383 | orchestrator | Thursday 16 April 2026 09:35:33 +0000 (0:00:04.129) 0:02:54.676 ******** 2026-04-16 09:35:45.589408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:45.589417 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:35:45.589464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:35:45.589477 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:35:45.589489 | orchestrator | 2026-04-16 09:35:45.589500 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-04-16 09:35:45.589512 | orchestrator | Thursday 16 April 2026 09:35:37 +0000 (0:00:03.863) 0:02:58.539 ******** 2026-04-16 09:35:45.589524 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:35:45.589535 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:35:45.589546 | orchestrator | 2026-04-16 09:35:45.589558 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-04-16 09:35:45.589569 | orchestrator | Thursday 16 April 2026 09:35:41 +0000 (0:00:04.267) 0:03:02.806 ******** 2026-04-16 09:35:45.589589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:35:45.589623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:36:27.931252 | orchestrator | 2026-04-16 09:36:27.931472 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-04-16 09:36:27.931492 | orchestrator | Thursday 16 April 2026 09:35:46 +0000 (0:00:04.910) 0:03:07.717 ******** 2026-04-16 09:36:27.931503 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:36:27.931522 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:36:27.931539 | orchestrator | 2026-04-16 09:36:27.931557 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-04-16 09:36:27.931575 | orchestrator | Thursday 16 April 2026 09:35:53 +0000 (0:00:06.822) 0:03:14.540 ******** 2026-04-16 09:36:27.931593 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.931611 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.931629 | orchestrator | 2026-04-16 09:36:27.931646 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-04-16 09:36:27.931684 | orchestrator | Thursday 16 April 2026 09:35:57 +0000 (0:00:04.090) 0:03:18.630 ******** 2026-04-16 09:36:27.931703 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.931721 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.931741 | orchestrator | 2026-04-16 09:36:27.931760 | orchestrator | TASK [glance : Copying 
over property-protections-rules.conf] ******************* 2026-04-16 09:36:27.931779 | orchestrator | Thursday 16 April 2026 09:36:01 +0000 (0:00:03.649) 0:03:22.279 ******** 2026-04-16 09:36:27.931799 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.931813 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.931825 | orchestrator | 2026-04-16 09:36:27.931837 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-04-16 09:36:27.931850 | orchestrator | Thursday 16 April 2026 09:36:05 +0000 (0:00:03.855) 0:03:26.135 ******** 2026-04-16 09:36:27.931862 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.931874 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.931887 | orchestrator | 2026-04-16 09:36:27.931899 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-04-16 09:36:27.931912 | orchestrator | Thursday 16 April 2026 09:36:06 +0000 (0:00:01.203) 0:03:27.339 ******** 2026-04-16 09:36:27.931965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-16 09:36:27.931981 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.931992 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-04-16 09:36:27.932003 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.932014 | orchestrator | 2026-04-16 09:36:27.932024 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-04-16 09:36:27.932035 | orchestrator | Thursday 16 April 2026 09:36:10 +0000 (0:00:04.027) 0:03:31.367 ******** 2026-04-16 09:36:27.932046 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.932057 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.932068 | orchestrator | 2026-04-16 09:36:27.932078 | orchestrator | TASK [glance : Generating 'hostid' 
file for glance_api] ************************ 2026-04-16 09:36:27.932089 | orchestrator | Thursday 16 April 2026 09:36:14 +0000 (0:00:04.281) 0:03:35.648 ******** 2026-04-16 09:36:27.932100 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:36:27.932110 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:36:27.932121 | orchestrator | 2026-04-16 09:36:27.932131 | orchestrator | TASK [service-check-containers : glance | Check containers] ******************** 2026-04-16 09:36:27.932142 | orchestrator | Thursday 16 April 2026 09:36:18 +0000 (0:00:04.042) 0:03:39.691 ******** 2026-04-16 09:36:27.932157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:36:27.932204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:36:27.932227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-04-16 09:36:27.932240 | orchestrator 
| 2026-04-16 09:36:27.932252 | orchestrator | TASK [service-check-containers : glance | Notify handlers to restart containers] *** 2026-04-16 09:36:27.932264 | orchestrator | Thursday 16 April 2026 09:36:23 +0000 (0:00:04.931) 0:03:44.622 ******** 2026-04-16 09:36:27.932275 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:36:27.932316 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:36:27.932328 | orchestrator | } 2026-04-16 09:36:27.932339 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:36:27.932350 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:36:27.932360 | orchestrator | } 2026-04-16 09:36:27.932371 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:36:27.932382 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:36:27.932392 | orchestrator | } 2026-04-16 09:36:27.932403 | orchestrator | 2026-04-16 09:36:27.932414 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:36:27.932425 | orchestrator | Thursday 16 April 2026 09:36:24 +0000 (0:00:01.348) 0:03:45.971 ******** 2026-04-16 09:36:27.932452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:37:29.351605 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:37:29.351726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 
'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:37:29.351757 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:37:29.351779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-04-16 09:37:29.351837 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:37:29.351860 | orchestrator | 2026-04-16 09:37:29.351880 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-16 09:37:29.351913 | orchestrator | Thursday 16 April 2026 09:36:29 +0000 (0:00:04.394) 0:03:50.366 ******** 2026-04-16 09:37:29.351925 | orchestrator | 2026-04-16 09:37:29.351936 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-16 09:37:29.351947 | orchestrator | Thursday 16 April 2026 09:36:29 +0000 (0:00:00.433) 0:03:50.799 ******** 2026-04-16 09:37:29.351958 | orchestrator | 2026-04-16 09:37:29.351968 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-04-16 09:37:29.352000 | orchestrator | Thursday 16 April 2026 09:36:30 +0000 (0:00:00.415) 0:03:51.215 ******** 2026-04-16 09:37:29.352012 | orchestrator | 2026-04-16 09:37:29.352023 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-04-16 09:37:29.352034 | orchestrator | Thursday 16 April 2026 09:36:30 +0000 (0:00:00.811) 0:03:52.027 ******** 2026-04-16 09:37:29.352045 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:37:29.352056 | orchestrator | changed: [testbed-node-1] 2026-04-16 
09:37:29.352066 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:37:29.352077 | orchestrator |
2026-04-16 09:37:29.352088 | orchestrator | TASK [glance : Running Glance database contract container] *********************
2026-04-16 09:37:29.352099 | orchestrator | Thursday 16 April 2026 09:37:07 +0000 (0:00:36.174) 0:04:28.201 ********
2026-04-16 09:37:29.352110 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:37:29.352123 | orchestrator |
2026-04-16 09:37:29.352135 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-04-16 09:37:29.352148 | orchestrator | Thursday 16 April 2026 09:37:22 +0000 (0:00:15.674) 0:04:43.876 ********
2026-04-16 09:37:29.352160 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:37:29.352173 | orchestrator |
2026-04-16 09:37:29.352185 | orchestrator | TASK [glance : Finish Glance upgrade] ******************************************
2026-04-16 09:37:29.352197 | orchestrator | Thursday 16 April 2026 09:37:26 +0000 (0:00:03.167) 0:04:47.044 ********
2026-04-16 09:37:29.352209 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:37:29.352221 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:37:29.352234 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:37:29.352277 | orchestrator |
2026-04-16 09:37:29.352292 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-04-16 09:37:29.352304 | orchestrator | Thursday 16 April 2026 09:37:27 +0000 (0:00:01.308) 0:04:48.353 ********
2026-04-16 09:37:29.352317 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:37:29.352330 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:37:29.352342 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:37:29.352354 | orchestrator |
2026-04-16 09:37:29.352366 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:37:29.352380 | orchestrator | testbed-node-0 : ok=27  changed=11  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-16 09:37:29.352404 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-16 09:37:29.352417 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-04-16 09:37:29.352429 | orchestrator |
2026-04-16 09:37:29.352442 | orchestrator |
2026-04-16 09:37:29.352454 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:37:29.352466 | orchestrator | Thursday 16 April 2026 09:37:29 +0000 (0:00:01.679) 0:04:50.033 ********
2026-04-16 09:37:29.352479 | orchestrator | ===============================================================================
2026-04-16 09:37:29.352491 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.18s
2026-04-16 09:37:29.352504 | orchestrator | glance : Running Glance database expand container ---------------------- 26.40s
2026-04-16 09:37:29.352515 | orchestrator | glance : Running Glance database contract container -------------------- 15.67s
2026-04-16 09:37:29.352525 | orchestrator | glance : Running Glance database migrate container --------------------- 14.94s
2026-04-16 09:37:29.352536 | orchestrator | glance : Stop glance service ------------------------------------------- 11.71s
2026-04-16 09:37:29.352546 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.82s
2026-04-16 09:37:29.352557 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.35s
2026-04-16 09:37:29.352568 | orchestrator | glance : Copying over config.json files for services -------------------- 5.07s
2026-04-16 09:37:29.352579 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.98s
2026-04-16 09:37:29.352589 | orchestrator | service-check-containers : glance | Check containers -------------------- 4.93s
2026-04-16 09:37:29.352600 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.92s
2026-04-16 09:37:29.352610 | orchestrator | glance : Copying over config.json files for services -------------------- 4.91s
2026-04-16 09:37:29.352621 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.59s
2026-04-16 09:37:29.352632 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.56s
2026-04-16 09:37:29.352643 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.45s
2026-04-16 09:37:29.352653 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.42s
2026-04-16 09:37:29.352664 | orchestrator | service-check-containers : Include tasks -------------------------------- 4.39s
2026-04-16 09:37:29.352675 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.28s
2026-04-16 09:37:29.352686 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.27s
2026-04-16 09:37:29.352697 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.15s
2026-04-16 09:37:29.537036 | orchestrator | + osism apply -a upgrade cinder
2026-04-16 09:37:30.786299 | orchestrator | 2026-04-16 09:37:30 | INFO  | Prepare task for execution of cinder.
2026-04-16 09:37:30.849234 | orchestrator | 2026-04-16 09:37:30 | INFO  | Task 2efd1206-e394-4c4e-9cd8-a7a0248233a1 (cinder) was prepared for execution.
2026-04-16 09:37:30.849371 | orchestrator | 2026-04-16 09:37:30 | INFO  | It takes a moment until task 2efd1206-e394-4c4e-9cd8-a7a0248233a1 (cinder) has been started and output is visible here.
2026-04-16 09:37:52.517798 | orchestrator | 2026-04-16 09:37:52.517920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:37:52.517932 | orchestrator | 2026-04-16 09:37:52.517939 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:37:52.517945 | orchestrator | Thursday 16 April 2026 09:37:35 +0000 (0:00:01.678) 0:00:01.678 ******** 2026-04-16 09:37:52.517954 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:37:52.517966 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:37:52.518073 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:37:52.518089 | orchestrator | 2026-04-16 09:37:52.518099 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:37:52.518109 | orchestrator | Thursday 16 April 2026 09:37:37 +0000 (0:00:01.643) 0:00:03.322 ******** 2026-04-16 09:37:52.518120 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-04-16 09:37:52.518130 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-04-16 09:37:52.518139 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-04-16 09:37:52.518149 | orchestrator | 2026-04-16 09:37:52.518158 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-04-16 09:37:52.518168 | orchestrator | 2026-04-16 09:37:52.518178 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:37:52.518187 | orchestrator | Thursday 16 April 2026 09:37:39 +0000 (0:00:01.933) 0:00:05.255 ******** 2026-04-16 09:37:52.518198 | orchestrator | included: /ansible/roles/cinder/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:37:52.518209 | orchestrator | 2026-04-16 09:37:52.518218 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 
09:37:52.518228 | orchestrator | Thursday 16 April 2026 09:37:42 +0000 (0:00:02.971) 0:00:08.227 ******** 2026-04-16 09:37:52.518287 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:37:52.518302 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:37:52.518309 | orchestrator | included: /ansible/roles/cinder/tasks/config.yml for testbed-node-0 2026-04-16 09:37:52.518315 | orchestrator | 2026-04-16 09:37:52.518321 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-16 09:37:52.518327 | orchestrator | Thursday 16 April 2026 09:37:43 +0000 (0:00:01.404) 0:00:09.631 ******** 2026-04-16 09:37:52.518337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:37:52.518348 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:37:52.518368 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:37:52.518402 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:37:52.518410 | 
orchestrator | 2026-04-16 09:37:52.518417 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:37:52.518424 | orchestrator | Thursday 16 April 2026 09:37:47 +0000 (0:00:03.397) 0:00:13.028 ******** 2026-04-16 09:37:52.518430 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:37:52.518437 | orchestrator | 2026-04-16 09:37:52.518443 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:37:52.518450 | orchestrator | Thursday 16 April 2026 09:37:48 +0000 (0:00:01.104) 0:00:14.132 ******** 2026-04-16 09:37:52.518456 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0 2026-04-16 09:37:52.518463 | orchestrator | 2026-04-16 09:37:52.518469 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-16 09:37:52.518476 | orchestrator | Thursday 16 April 2026 09:37:49 +0000 (0:00:01.427) 0:00:15.560 ******** 2026-04-16 09:37:52.518482 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-16 09:37:52.518489 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-16 09:37:52.518496 | orchestrator | 2026-04-16 09:37:52.518503 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-16 09:37:52.518509 | orchestrator | Thursday 16 April 2026 09:37:52 +0000 (0:00:02.553) 0:00:18.113 ******** 2026-04-16 09:37:52.518517 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-16 09:37:52.518525 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-16 09:37:52.518548 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-16 09:38:12.216064 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-16 09:38:12.216199 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-16 09:38:12.216357 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-16 09:38:12.216388 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}]) 2026-04-16 09:38:12.216465 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}]) 2026-04-16 09:38:12.216481 | orchestrator | 2026-04-16 09:38:12.216494 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-04-16 09:38:12.216507 | orchestrator | Thursday 16 April 2026 09:37:58 +0000 (0:00:06.166) 0:00:24.280 ******** 2026-04-16 09:38:12.216519 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 2026-04-16 09:38:12.216531 | orchestrator | 2026-04-16 09:38:12.216543 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-04-16 09:38:12.216554 | orchestrator | Thursday 16 April 2026 09:38:00 +0000 (0:00:02.277) 0:00:26.557 ******** 2026-04-16 09:38:12.216567 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}) 
2026-04-16 09:38:12.216581 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}) 2026-04-16 09:38:12.216595 | orchestrator | 2026-04-16 09:38:12.216607 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-04-16 09:38:12.216619 | orchestrator | Thursday 16 April 2026 09:38:03 +0000 (0:00:03.432) 0:00:29.990 ******** 2026-04-16 09:38:12.216632 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-16 09:38:12.216645 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-16 09:38:12.216657 | orchestrator | 2026-04-16 09:38:12.216670 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-04-16 09:38:12.216682 | orchestrator | Thursday 16 April 2026 09:38:05 +0000 (0:00:01.820) 0:00:31.812 ******** 2026-04-16 09:38:12.216694 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:38:12.216708 | orchestrator | 2026-04-16 09:38:12.216721 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-04-16 09:38:12.216739 | orchestrator | Thursday 16 April 2026 09:38:06 +0000 (0:00:01.099) 0:00:32.911 ******** 2026-04-16 09:38:12.216758 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:38:12.216778 | orchestrator | 2026-04-16 09:38:12.216796 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:38:12.216814 | orchestrator | Thursday 16 April 2026 09:38:07 +0000 (0:00:01.098) 0:00:34.009 ******** 2026-04-16 09:38:12.216832 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0 2026-04-16 09:38:12.216865 | orchestrator | 2026-04-16 09:38:12.216883 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-04-16 09:38:12.216902 | orchestrator | Thursday 
16 April 2026 09:38:09 +0000 (0:00:01.471) 0:00:35.481 ******** 2026-04-16 09:38:12.216922 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:38:12.216952 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:12.216989 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:18.881926 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:18.882121 | orchestrator | 2026-04-16 09:38:18.882148 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-04-16 09:38:18.882166 | orchestrator | Thursday 16 April 2026 09:38:14 +0000 (0:00:04.765) 0:00:40.246 ******** 2026-04-16 09:38:18.882189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:38:18.882263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882336 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:38:18.882354 | orchestrator | 2026-04-16 09:38:18.882392 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-04-16 09:38:18.882408 | orchestrator | Thursday 16 April 2026 09:38:15 +0000 (0:00:01.620) 0:00:41.867 ******** 2026-04-16 09:38:18.882426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:38:18.882455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:38:18.882512 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:38:18.882527 | orchestrator | 2026-04-16 09:38:18.882542 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-16 09:38:18.882558 | orchestrator | Thursday 16 April 2026 09:38:17 +0000 (0:00:01.619) 0:00:43.487 ******** 2026-04-16 09:38:18.882586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 
09:38:45.463011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463202 | orchestrator | 2026-04-16 09:38:45.463303 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-16 09:38:45.463317 | orchestrator | Thursday 16 April 2026 09:38:22 +0000 (0:00:05.181) 0:00:48.669 ******** 2026-04-16 09:38:45.463328 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-16 09:38:45.463340 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:38:45.463353 | orchestrator | 2026-04-16 09:38:45.463364 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-16 09:38:45.463375 | orchestrator | Thursday 16 April 2026 09:38:24 +0000 (0:00:01.511) 0:00:50.180 ******** 2026-04-16 09:38:45.463387 | orchestrator | included: service-uwsgi-config for testbed-node-0 2026-04-16 09:38:45.463398 | orchestrator | 2026-04-16 09:38:45.463409 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-16 09:38:45.463435 | orchestrator | Thursday 16 April 2026 09:38:25 +0000 (0:00:01.754) 0:00:51.935 ******** 2026-04-16 09:38:45.463446 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:38:45.463457 | orchestrator | 2026-04-16 09:38:45.463468 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-16 09:38:45.463479 | orchestrator | Thursday 16 April 2026 09:38:28 +0000 (0:00:02.512) 0:00:54.447 ******** 2026-04-16 09:38:45.463493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:38:45.463537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:38:45.463579 | orchestrator | 2026-04-16 09:38:45.463591 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-16 09:38:45.463603 | orchestrator | Thursday 16 April 2026 09:38:40 +0000 (0:00:11.718) 0:01:06.165 ******** 2026-04-16 09:38:45.463616 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:38:45.463628 | orchestrator | 2026-04-16 09:38:45.463640 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-16 09:38:45.463653 | orchestrator | Thursday 16 April 2026 09:38:42 +0000 (0:00:02.272) 0:01:08.438 ******** 2026-04-16 09:38:45.463665 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:38:45.463678 | orchestrator | 2026-04-16 09:38:45.463690 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-16 09:38:45.463702 | orchestrator | Thursday 16 April 2026 09:38:44 +0000 (0:00:02.492) 0:01:10.931 ******** 
2026-04-16 09:38:45.463721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:38:45.463752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:39:29.133473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:39:29.133620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:39:29.133649 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:39:29.133673 | orchestrator | 2026-04-16 09:39:29.133693 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-16 09:39:29.133716 | orchestrator | Thursday 16 April 2026 09:38:46 +0000 (0:00:01.605) 0:01:12.536 ******** 2026-04-16 09:39:29.133734 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:39:29.133752 | orchestrator | 2026-04-16 09:39:29.133771 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-04-16 09:39:29.133789 | orchestrator | Thursday 16 April 2026 09:38:47 
+0000 (0:00:01.451) 0:01:13.988 ******** 2026-04-16 09:39:29.133808 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:39:29.133827 | orchestrator | 2026-04-16 09:39:29.133846 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-04-16 09:39:29.133864 | orchestrator | Thursday 16 April 2026 09:39:27 +0000 (0:00:39.391) 0:01:53.379 ******** 2026-04-16 09:39:29.133907 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:39:29.133968 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:29.134015 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:39:29.134110 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:39:29.134132 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:29.134162 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:29.134266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:29.134307 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:36.634814 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:36.634931 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:36.634948 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:36.635001 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:39:36.635015 | orchestrator | 2026-04-16 09:39:36.635028 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:39:36.635041 | orchestrator | Thursday 16 April 2026 09:39:30 +0000 (0:00:03.385) 0:01:56.765 ******** 2026-04-16 09:39:36.635053 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:39:36.635065 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:39:36.635076 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:39:36.635087 | orchestrator | 2026-04-16 09:39:36.635098 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-04-16 09:39:36.635109 | orchestrator | Thursday 16 April 2026 09:39:32 +0000 (0:00:01.331) 0:01:58.096 ******** 2026-04-16 09:39:36.635120 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:39:36.635131 | orchestrator | 2026-04-16 09:39:36.635142 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-04-16 09:39:36.635153 | orchestrator | Thursday 16 April 2026 09:39:33 +0000 (0:00:01.427) 0:01:59.523 ******** 2026-04-16 09:39:36.635164 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-04-16 09:39:36.635176 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-04-16 09:39:36.635219 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-04-16 09:39:36.635230 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-04-16 09:39:36.635241 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-04-16 09:39:36.635251 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 
2026-04-16 09:39:36.635262 | orchestrator | 2026-04-16 09:39:36.635274 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-04-16 09:39:36.635305 | orchestrator | Thursday 16 April 2026 09:39:36 +0000 (0:00:02.654) 0:02:02.178 ******** 2026-04-16 09:39:36.635322 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-16 09:39:36.635342 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-16 09:39:36.635364 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-16 09:39:36.635378 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])  2026-04-16 09:39:36.635401 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])  2026-04-16 09:39:37.888351 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:37.888529 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:37.888563 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:37.888587 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:37.888636 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:37.888679 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:37.888701 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:37.888721 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:37.888755 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1',
'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:41.351582 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:41.351711 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:41.351731 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:41.351737 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:41.351764 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:41.351778 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:41.351787 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True}])
2026-04-16 09:39:41.351794 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:41.351800 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:41.351810 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True}])
2026-04-16 09:39:57.646985 | orchestrator |
2026-04-16 09:39:57.647105 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-04-16 09:39:57.647122 | orchestrator | Thursday 16 April 2026 09:39:42 +0000 (0:00:06.283) 0:02:08.461 ********
2026-04-16 09:39:57.647134 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647145 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647155 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647165 | orchestrator |
2026-04-16 09:39:57.647225 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-04-16 09:39:57.647237 | orchestrator | Thursday 16 April 2026 09:39:45 +0000 (0:00:02.714) 0:02:11.176 ********
2026-04-16 09:39:57.647247 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647273 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647283 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder', 'pool': 'volumes', 'enabled': True})
2026-04-16 09:39:57.647294 | orchestrator | ok: [testbed-node-1] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-16 09:39:57.647305 | orchestrator | ok: [testbed-node-0] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-16 09:39:57.647315 | orchestrator | ok: [testbed-node-2] => (item={'name': 'rbd-1', 'backend_name': 'rbd-1', 'cluster': 'ceph', 'user': 'cinder-backup', 'pool': 'backups', 'enabled': True})
2026-04-16 09:39:57.647325 | orchestrator |
2026-04-16 09:39:57.647335 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-04-16 09:39:57.647345 | orchestrator | Thursday 16 April 2026 09:39:48 +0000 (0:00:03.686) 0:02:14.862 ********
2026-04-16 09:39:57.647355 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-04-16 09:39:57.647365 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-04-16 09:39:57.647375 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-04-16 09:39:57.647385 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-04-16 09:39:57.647394 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-04-16 09:39:57.647404 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-04-16 09:39:57.647414 | orchestrator |
2026-04-16 09:39:57.647423 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-04-16 09:39:57.647434 | orchestrator | Thursday 16 April 2026 09:39:50 +0000 (0:00:02.012) 0:02:16.875 ********
2026-04-16 09:39:57.647444 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:39:57.647455 | orchestrator |
2026-04-16 09:39:57.647464 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-04-16 09:39:57.647474 | orchestrator | Thursday 16 April 2026 09:39:51 +0000 (0:00:01.088) 0:02:17.964 ********
2026-04-16 09:39:57.647484 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:39:57.647496 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:39:57.647507 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:39:57.647538 | orchestrator |
2026-04-16 09:39:57.647550 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-04-16 09:39:57.647561 | orchestrator | Thursday 16 April 2026 09:39:53 +0000 (0:00:01.558) 0:02:19.523 ********
2026-04-16 09:39:57.647573 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:39:57.647583 | orchestrator |
2026-04-16 09:39:57.647595 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-04-16 09:39:57.647606 | orchestrator | Thursday 16 April 2026 09:39:54 +0000 (0:00:01.300) 0:02:20.824 ********
2026-04-16 09:39:57.647640 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:39:57.647665 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:39:57.647679 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:39:57.647692 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:39:57.647716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:39:57.647734 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:39:57.647770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653798 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653877 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653903 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653910 | orchestrator | ok:
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653916 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653922 | orchestrator |
2026-04-16 09:40:00.653928 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-04-16 09:40:00.653935 | orchestrator | Thursday 16 April 2026 09:39:59 +0000 (0:00:05.185) 0:02:26.010 ********
2026-04-16 09:40:00.653960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:40:00.653968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:00.653995 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:40:00.654002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:40:00.654055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324699 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:40:02.324767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:40:02.324789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-04-16 09:40:02.324888 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:40:02.324902 | orchestrator |
2026-04-16 09:40:02.324916 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-04-16 09:40:02.324931 |
orchestrator | Thursday 16 April 2026 09:40:01 +0000 (0:00:01.858) 0:02:27.868 ******** 2026-04-16 09:40:02.324947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:02.324963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:02.324977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:02.324991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:02.325014 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:40:02.325040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:05.220996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221091 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221099 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:40:05.221119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:05.221126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:05.221223 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:40:05.221230 | orchestrator | 2026-04-16 09:40:05.221236 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-04-16 09:40:05.221243 | orchestrator | Thursday 16 April 2026 09:40:03 +0000 (0:00:01.632) 0:02:29.501 ******** 2026-04-16 09:40:05.221249 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:05.221263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:05.221287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:18.458671 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458825 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:18.458839 | orchestrator | 2026-04-16 09:40:18.458845 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-04-16 09:40:18.458850 | orchestrator | Thursday 16 April 2026 09:40:09 +0000 (0:00:05.716) 0:02:35.217 ******** 2026-04-16 09:40:18.458859 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-16 09:40:18.458865 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:40:18.458870 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-16 09:40:18.458875 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:40:18.458882 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2026-04-16 09:40:18.458886 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:40:18.458890 | orchestrator | 2026-04-16 09:40:18.458895 | orchestrator | TASK [Configure uWSGI for Cinder] ********************************************** 2026-04-16 09:40:18.458899 | orchestrator | Thursday 16 April 2026 09:40:10 +0000 (0:00:01.637) 0:02:36.854 ******** 2026-04-16 09:40:18.458903 | orchestrator | included: service-uwsgi-config for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:40:18.458908 | orchestrator | 2026-04-16 09:40:18.458912 | orchestrator | TASK [service-uwsgi-config : Copying over cinder-api uWSGI config] ************* 2026-04-16 09:40:18.458916 | orchestrator | Thursday 16 April 2026 09:40:12 +0000 (0:00:01.645) 0:02:38.500 ******** 2026-04-16 09:40:18.458920 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:40:18.458925 | orchestrator | 
changed: [testbed-node-1] 2026-04-16 09:40:18.458929 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:40:18.458933 | orchestrator | 2026-04-16 09:40:18.458937 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-04-16 09:40:18.458942 | orchestrator | Thursday 16 April 2026 09:40:15 +0000 (0:00:03.108) 0:02:41.609 ******** 2026-04-16 09:40:18.458950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:26.900812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:26.900937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:26.901009 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901082 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2026-04-16 09:40:26.901186 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:26.901231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:34.145007 | orchestrator | 2026-04-16 09:40:34.145123 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-04-16 09:40:34.145140 | orchestrator | Thursday 16 April 2026 09:40:27 +0000 (0:00:12.410) 0:02:54.019 ******** 2026-04-16 09:40:34.145152 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:40:34.145200 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:40:34.145212 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:40:34.145223 | orchestrator | 2026-04-16 09:40:34.145235 | orchestrator | TASK [cinder : Generating 'hostid' file for cinder_volume] ********************* 2026-04-16 09:40:34.145246 | orchestrator | Thursday 16 April 2026 09:40:30 +0000 (0:00:02.750) 0:02:56.770 ******** 2026-04-16 09:40:34.145258 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:40:34.145269 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:40:34.145311 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:40:34.145331 | orchestrator | 2026-04-16 09:40:34.145351 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-04-16 09:40:34.145370 | orchestrator | Thursday 16 April 2026 09:40:33 +0000 (0:00:02.824) 0:02:59.594 ******** 2026-04-16 09:40:34.145396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:34.145439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145505 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:40:34.145555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:34.145585 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:34.145632 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:40:34.145646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:34.145668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:40.086316 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:40.086469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:40.086498 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:40:40.086521 | orchestrator | 2026-04-16 09:40:40.086540 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-04-16 09:40:40.086584 | orchestrator | Thursday 16 April 2026 09:40:35 +0000 (0:00:01.640) 0:03:01.235 ******** 2026-04-16 09:40:40.086626 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:40:40.086661 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:40:40.086681 | 
orchestrator | skipping: [testbed-node-2] 2026-04-16 09:40:40.086699 | orchestrator | 2026-04-16 09:40:40.086718 | orchestrator | TASK [service-check-containers : cinder | Check containers] ******************** 2026-04-16 09:40:40.086736 | orchestrator | Thursday 16 April 2026 09:40:36 +0000 (0:00:01.664) 0:03:02.899 ******** 2026-04-16 09:40:40.086774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:40.086799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:40.086878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:40:40.086909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:40.086930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:40.086948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:40.086968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:40.087009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:44.110476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:44.110574 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:44.110584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:44.110590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:40:44.110612 | orchestrator | 2026-04-16 09:40:44.110619 | orchestrator | TASK [service-check-containers : cinder | Notify handlers to restart containers] *** 2026-04-16 09:40:44.110625 | orchestrator | Thursday 16 April 2026 09:40:42 +0000 (0:00:05.374) 0:03:08.274 ******** 2026-04-16 09:40:44.110631 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:40:44.110637 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:40:44.110642 | orchestrator | } 2026-04-16 09:40:44.110647 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:40:44.110652 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:40:44.110656 | orchestrator | } 2026-04-16 09:40:44.110661 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:40:44.110665 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:40:44.110670 | orchestrator | } 2026-04-16 09:40:44.110674 | orchestrator | 2026-04-16 09:40:44.110679 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:40:44.110684 | orchestrator | Thursday 16 April 2026 09:40:43 +0000 (0:00:01.381) 0:03:09.655 ******** 2026-04-16 09:40:44.110704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 
'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:44.110712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:40:44.110720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:40:44.110725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:40:44.110735 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:40:44.110741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:40:44.110751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:43:10.344994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:43:10.345136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 
09:43:10.345152 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:43:10.345168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:43:10.345200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 09:43:10.345211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-04-16 09:43:10.345238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-04-16 09:43:10.345248 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:43:10.345258 | orchestrator | 2026-04-16 09:43:10.345269 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-16 09:43:10.345280 | orchestrator | Thursday 16 April 2026 09:40:45 +0000 (0:00:01.706) 0:03:11.362 ******** 2026-04-16 09:43:10.345290 | orchestrator | 2026-04-16 09:43:10.345299 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-16 09:43:10.345308 | orchestrator | Thursday 16 April 2026 09:40:45 +0000 (0:00:00.424) 0:03:11.786 ******** 2026-04-16 
09:43:10.345317 | orchestrator | 2026-04-16 09:43:10.345326 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-04-16 09:43:10.345335 | orchestrator | Thursday 16 April 2026 09:40:46 +0000 (0:00:00.600) 0:03:12.387 ******** 2026-04-16 09:43:10.345345 | orchestrator | 2026-04-16 09:43:10.345358 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-04-16 09:43:10.345368 | orchestrator | Thursday 16 April 2026 09:40:47 +0000 (0:00:00.778) 0:03:13.166 ******** 2026-04-16 09:43:10.345385 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:43:10.345394 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:43:10.345403 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:43:10.345412 | orchestrator | 2026-04-16 09:43:10.345421 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-04-16 09:43:10.345430 | orchestrator | Thursday 16 April 2026 09:41:19 +0000 (0:00:32.307) 0:03:45.473 ******** 2026-04-16 09:43:10.345439 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:43:10.345448 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:43:10.345458 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:43:10.345466 | orchestrator | 2026-04-16 09:43:10.345475 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-04-16 09:43:10.345484 | orchestrator | Thursday 16 April 2026 09:41:37 +0000 (0:00:18.116) 0:04:03.589 ******** 2026-04-16 09:43:10.345494 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:43:10.345502 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:43:10.345511 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:43:10.345520 | orchestrator | 2026-04-16 09:43:10.345529 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-04-16 09:43:10.345538 | orchestrator | Thursday 16 April 2026 
09:42:18 +0000 (0:00:41.314) 0:04:44.903 ******** 2026-04-16 09:43:10.345547 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:43:10.345557 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:43:10.345566 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:43:10.345575 | orchestrator | 2026-04-16 09:43:10.345585 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-04-16 09:43:10.345596 | orchestrator | Thursday 16 April 2026 09:42:32 +0000 (0:00:13.913) 0:04:58.817 ******** 2026-04-16 09:43:10.345605 | orchestrator | Pausing for 30 seconds 2026-04-16 09:43:10.345615 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:43:10.345625 | orchestrator | 2026-04-16 09:43:10.345634 | orchestrator | TASK [cinder : Reload cinder services to remove RPC version pin] *************** 2026-04-16 09:43:10.345644 | orchestrator | Thursday 16 April 2026 09:43:04 +0000 (0:00:31.456) 0:05:30.273 ******** 2026-04-16 09:43:10.345654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:43:10.345671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:43:54.347383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:43:54.347505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-04-16 09:43:54.347730 | orchestrator | 2026-04-16 09:43:54.347744 | orchestrator | TASK [cinder : Running Cinder online schema migration] ************************* 2026-04-16 09:43:54.347756 | orchestrator | Thursday 16 April 2026 09:43:38 +0000 (0:00:34.124) 0:06:04.398 ******** 2026-04-16 09:43:54.347767 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:43:54.347779 | orchestrator | 2026-04-16 09:43:54.347790 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:43:54.347812 | orchestrator | testbed-node-0 : ok=44  changed=13  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 09:43:54.347825 | orchestrator | testbed-node-1 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-16 09:43:54.347836 | orchestrator | testbed-node-2 : ok=25  changed=11  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-04-16 09:43:54.347846 | orchestrator | 2026-04-16 09:43:54.347857 | orchestrator | 2026-04-16 09:43:54.347868 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 09:43:54.347888 | orchestrator | 
Thursday 16 April 2026 09:43:54 +0000 (0:00:15.958) 0:06:20.356 ******** 2026-04-16 09:43:54.741987 | orchestrator | =============================================================================== 2026-04-16 09:43:54.742176 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 41.31s 2026-04-16 09:43:54.742191 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 39.39s 2026-04-16 09:43:54.742201 | orchestrator | cinder : Reload cinder services to remove RPC version pin -------------- 34.12s 2026-04-16 09:43:54.742211 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.31s 2026-04-16 09:43:54.742221 | orchestrator | cinder : Wait for cinder services to update service versions ----------- 31.46s 2026-04-16 09:43:54.742248 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 18.12s 2026-04-16 09:43:54.742258 | orchestrator | cinder : Running Cinder online schema migration ------------------------ 15.96s 2026-04-16 09:43:54.742267 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 13.91s 2026-04-16 09:43:54.742277 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.41s 2026-04-16 09:43:54.742287 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.72s 2026-04-16 09:43:54.742296 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.28s 2026-04-16 09:43:54.742306 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.17s 2026-04-16 09:43:54.742316 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.72s 2026-04-16 09:43:54.742325 | orchestrator | service-check-containers : cinder | Check containers -------------------- 5.37s 2026-04-16 09:43:54.742335 | orchestrator | service-cert-copy : 
cinder | Copying over extra CA certificates --------- 5.19s 2026-04-16 09:43:54.742344 | orchestrator | cinder : Copying over config.json files for services -------------------- 5.18s 2026-04-16 09:43:54.742353 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.77s 2026-04-16 09:43:54.742363 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.69s 2026-04-16 09:43:54.742372 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.43s 2026-04-16 09:43:54.742382 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.40s 2026-04-16 09:43:54.906323 | orchestrator | + osism apply -a upgrade barbican 2026-04-16 09:43:56.264467 | orchestrator | 2026-04-16 09:43:56 | INFO  | Prepare task for execution of barbican. 2026-04-16 09:43:56.329867 | orchestrator | 2026-04-16 09:43:56 | INFO  | Task fe33e70f-87ec-4af7-bc04-f3b01fe36fd7 (barbican) was prepared for execution. 2026-04-16 09:43:56.329986 | orchestrator | 2026-04-16 09:43:56 | INFO  | It takes a moment until task fe33e70f-87ec-4af7-bc04-f3b01fe36fd7 (barbican) has been started and output is visible here. 
2026-04-16 09:44:05.493264 | orchestrator | 2026-04-16 09:44:05.493415 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:44:05.493446 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-04-16 09:44:05.493467 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-04-16 09:44:05.493542 | orchestrator | 2026-04-16 09:44:05.493560 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:44:05.493577 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-04-16 09:44:05.493595 | orchestrator | (): 'NoneType' object is not subscriptable 2026-04-16 09:44:05.493630 | orchestrator | Thursday 16 April 2026 09:44:01 +0000 (0:00:01.492) 0:00:01.492 ******** 2026-04-16 09:44:05.493647 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:44:05.493665 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:44:05.493682 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:44:05.493701 | orchestrator | 2026-04-16 09:44:05.493719 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:44:05.493739 | orchestrator | Thursday 16 April 2026 09:44:01 +0000 (0:00:00.835) 0:00:02.328 ******** 2026-04-16 09:44:05.493759 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-04-16 09:44:05.493778 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-04-16 09:44:05.493797 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-04-16 09:44:05.493810 | orchestrator | 2026-04-16 09:44:05.493824 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-04-16 09:44:05.493836 | orchestrator | 2026-04-16 09:44:05.493849 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-16 
09:44:05.493862 | orchestrator | Thursday 16 April 2026 09:44:02 +0000 (0:00:00.597) 0:00:02.925 ******** 2026-04-16 09:44:05.493875 | orchestrator | included: /ansible/roles/barbican/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:44:05.493888 | orchestrator | 2026-04-16 09:44:05.493900 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-04-16 09:44:05.493913 | orchestrator | Thursday 16 April 2026 09:44:03 +0000 (0:00:01.017) 0:00:03.943 ******** 2026-04-16 09:44:05.493947 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:05.493966 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:05.494119 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:05.494142 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:05.494155 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:05.494174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:05.494187 | orchestrator | ok: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:05.494207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:05.494227 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803250 | orchestrator | 2026-04-16 09:44:10.803339 | orchestrator | 
TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-04-16 09:44:10.803351 | orchestrator | Thursday 16 April 2026 09:44:05 +0000 (0:00:02.037) 0:00:05.981 ******** 2026-04-16 09:44:10.803359 | orchestrator | ok: [testbed-node-0] => (item=barbican-api/vassals) 2026-04-16 09:44:10.803366 | orchestrator | ok: [testbed-node-2] => (item=barbican-api/vassals) 2026-04-16 09:44:10.803373 | orchestrator | ok: [testbed-node-1] => (item=barbican-api/vassals) 2026-04-16 09:44:10.803379 | orchestrator | 2026-04-16 09:44:10.803386 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-04-16 09:44:10.803392 | orchestrator | Thursday 16 April 2026 09:44:06 +0000 (0:00:01.028) 0:00:07.009 ******** 2026-04-16 09:44:10.803398 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:10.803406 | orchestrator | 2026-04-16 09:44:10.803412 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-04-16 09:44:10.803419 | orchestrator | Thursday 16 April 2026 09:44:06 +0000 (0:00:00.111) 0:00:07.121 ******** 2026-04-16 09:44:10.803425 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:10.803431 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:10.803439 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:44:10.803449 | orchestrator | 2026-04-16 09:44:10.803458 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-04-16 09:44:10.803468 | orchestrator | Thursday 16 April 2026 09:44:07 +0000 (0:00:00.346) 0:00:07.467 ******** 2026-04-16 09:44:10.803479 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:44:10.803490 | orchestrator | 2026-04-16 09:44:10.803505 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-04-16 09:44:10.803518 | orchestrator | Thursday 
16 April 2026 09:44:07 +0000 (0:00:00.735) 0:00:08.202 ******** 2026-04-16 09:44:10.803550 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:10.803587 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:10.803621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:10.803633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803645 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803662 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803680 | orchestrator | ok: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803693 | orchestrator | ok: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:10.803712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:11.756371 | orchestrator | 2026-04-16 09:44:11.756500 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-04-16 09:44:11.756531 | orchestrator | Thursday 16 April 2026 09:44:10 +0000 (0:00:03.094) 0:00:11.297 ******** 2026-04-16 09:44:11.756559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:11.756608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756693 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:11.756707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:11.756743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756767 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:11.756786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-16 09:44:11.756806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:11.756831 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:44:11.756842 | orchestrator | 2026-04-16 09:44:11.756854 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-04-16 09:44:11.756865 | orchestrator | Thursday 16 April 2026 09:44:11 +0000 (0:00:00.616) 0:00:11.913 ******** 2026-04-16 09:44:11.756885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:13.449450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449579 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:13.449604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:13.449614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449631 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:13.449655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}}}})  2026-04-16 09:44:13.449675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:13.449692 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:44:13.449700 | orchestrator | 2026-04-16 09:44:13.449709 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-04-16 09:44:13.449719 | orchestrator | Thursday 16 April 2026 09:44:12 +0000 (0:00:00.668) 0:00:12.582 ******** 2026-04-16 09:44:13.449728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:13.449744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 
09:44:21.528693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:21.529761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529897 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:21.529937 | orchestrator | 2026-04-16 09:44:21.529948 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-04-16 09:44:21.529959 | orchestrator | Thursday 16 April 2026 09:44:15 +0000 (0:00:03.555) 0:00:16.137 ******** 2026-04-16 09:44:21.529967 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:44:21.529977 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:44:21.529992 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:44:21.530001 | orchestrator | 2026-04-16 09:44:21.530010 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-04-16 09:44:21.530093 | orchestrator | Thursday 16 April 2026 09:44:17 +0000 (0:00:01.454) 0:00:17.592 
******** 2026-04-16 09:44:21.530103 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:44:21.530113 | orchestrator | 2026-04-16 09:44:21.530122 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-04-16 09:44:21.530131 | orchestrator | Thursday 16 April 2026 09:44:18 +0000 (0:00:01.206) 0:00:18.799 ******** 2026-04-16 09:44:21.530140 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:21.530149 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:21.530158 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:44:21.530166 | orchestrator | 2026-04-16 09:44:21.530175 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-04-16 09:44:21.530184 | orchestrator | Thursday 16 April 2026 09:44:18 +0000 (0:00:00.560) 0:00:19.359 ******** 2026-04-16 09:44:21.530195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:21.530207 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:21.530235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:26.166360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:26.166672 | orchestrator | 2026-04-16 09:44:26.166695 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-04-16 09:44:26.166738 | orchestrator | Thursday 16 April 2026 09:44:25 +0000 (0:00:06.637) 0:00:25.997 ******** 2026-04-16 09:44:26.166763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:26.166778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:26.166790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:26.166812 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:26.166825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:26.166846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:28.939190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:28.939269 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:28.939281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:28.939288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:28.939308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:28.939313 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:44:28.939318 | orchestrator | 2026-04-16 09:44:28.939324 | orchestrator | TASK [service-check-containers : barbican | Check containers] ****************** 2026-04-16 09:44:28.939330 | orchestrator | Thursday 16 April 2026 09:44:26 +0000 (0:00:01.167) 0:00:27.164 ******** 2026-04-16 09:44:28.939350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:28.939356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:44:28.939362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 
09:44:28.939371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:28.939377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:28.939389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:31.002363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:31.002467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:31.002537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:44:31.002552 | orchestrator | 2026-04-16 09:44:31.002566 | orchestrator | TASK [service-check-containers : barbican | Notify handlers to restart containers] *** 2026-04-16 09:44:31.002579 | orchestrator | Thursday 16 April 2026 09:44:29 +0000 (0:00:03.212) 0:00:30.377 ******** 2026-04-16 09:44:31.002592 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:44:31.002604 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:44:31.002615 | orchestrator | } 2026-04-16 09:44:31.002627 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:44:31.002637 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:44:31.002648 | orchestrator | } 2026-04-16 09:44:31.002659 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:44:31.002669 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:44:31.002680 | orchestrator | } 2026-04-16 09:44:31.002691 | orchestrator | 2026-04-16 09:44:31.002702 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:44:31.002713 | orchestrator | Thursday 16 April 2026 09:44:30 +0000 (0:00:00.342) 0:00:30.720 ******** 2026-04-16 09:44:31.002727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:31.002772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:31.002787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:31.002806 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:44:31.002818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:44:31.002830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:44:31.002842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:44:31.002853 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:44:31.002877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:22.942782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:47:22.942952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:47:22.943044 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:47:22.943066 | orchestrator | 2026-04-16 09:47:22.943085 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-04-16 09:47:22.943104 | orchestrator | Thursday 16 April 2026 09:44:31 +0000 (0:00:01.559) 0:00:32.280 ******** 2026-04-16 09:47:22.943120 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:47:22.943136 | orchestrator | 2026-04-16 09:47:22.943153 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-16 09:47:22.943169 | orchestrator | Thursday 16 April 2026 09:44:45 +0000 (0:00:13.183) 0:00:45.463 ******** 2026-04-16 09:47:22.943186 | orchestrator | 2026-04-16 09:47:22.943203 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-04-16 09:47:22.943219 | orchestrator | Thursday 16 April 2026 09:44:45 +0000 (0:00:00.078) 0:00:45.542 ******** 2026-04-16 09:47:22.943235 | orchestrator | 2026-04-16 
09:47:22.943252 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-04-16 09:47:22.943268 | orchestrator | Thursday 16 April 2026 09:44:45 +0000 (0:00:00.074) 0:00:45.616 ********
2026-04-16 09:47:22.943285 | orchestrator |
2026-04-16 09:47:22.943302 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-04-16 09:47:22.943319 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback
2026-04-16 09:47:22.943333 | orchestrator | plugin (): 'NoneType' object is not subscriptable
2026-04-16 09:47:22.943359 | orchestrator | Thursday 16 April 2026 09:44:45 +0000 (0:00:00.077) 0:00:45.694 ********
2026-04-16 09:47:22.943373 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:47:22.943387 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:47:22.943400 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:47:22.943414 | orchestrator |
2026-04-16 09:47:22.943428 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-04-16 09:47:22.943441 | orchestrator | Thursday 16 April 2026 09:46:58 +0000 (0:02:13.220) 0:02:58.914 ********
2026-04-16 09:47:22.943455 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:47:22.943469 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:47:22.943483 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:47:22.943497 | orchestrator |
2026-04-16 09:47:22.943511 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-04-16 09:47:22.943525 | orchestrator | Thursday 16 April 2026 09:47:10 +0000 (0:00:11.858) 0:03:10.773 ********
2026-04-16 09:47:22.943538 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:47:22.943552 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:47:22.943566 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:47:22.943580 | orchestrator |
2026-04-16 09:47:22.943594 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:47:22.943621 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-04-16 09:47:22.943635 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 09:47:22.943649 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 09:47:22.943662 | orchestrator |
2026-04-16 09:47:22.943674 | orchestrator |
2026-04-16 09:47:22.943688 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:47:22.943717 | orchestrator | Thursday 16 April 2026 09:47:22 +0000 (0:00:12.255) 0:03:23.028 ********
2026-04-16 09:47:22.943732 | orchestrator | ===============================================================================
2026-04-16 09:47:22.943746 | orchestrator | barbican : Restart barbican-api container ----------------------------- 133.22s
2026-04-16 09:47:22.943759 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.18s
2026-04-16 09:47:22.943793 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.26s
2026-04-16 09:47:22.943809 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.86s
2026-04-16 09:47:22.943823 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.64s
2026-04-16 09:47:22.943836 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.56s
2026-04-16 09:47:22.943849 | orchestrator | service-check-containers : barbican | Check containers ------------------ 3.21s
2026-04-16 09:47:22.943863 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.09s
2026-04-16 09:47:22.943876 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.04s
2026-04-16 09:47:22.943890 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.56s
2026-04-16 09:47:22.943903 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.45s
2026-04-16 09:47:22.943917 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.21s
2026-04-16 09:47:22.943931 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.17s
2026-04-16 09:47:22.943944 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.03s
2026-04-16 09:47:22.943959 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.02s
2026-04-16 09:47:22.944026 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s
2026-04-16 09:47:22.944041 | orchestrator | barbican : include_tasks ------------------------------------------------ 0.74s
2026-04-16 09:47:22.944054 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 0.67s
2026-04-16 09:47:22.944067 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 0.62s
2026-04-16 09:47:22.944080 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-04-16 09:47:23.113368 | orchestrator | + osism apply -a upgrade designate
2026-04-16 09:47:24.364597 | orchestrator | 2026-04-16 09:47:24 | INFO  | Prepare task for execution of designate.
2026-04-16 09:47:24.428282 | orchestrator | 2026-04-16 09:47:24 | INFO  | Task fceb48d5-fc08-4fe1-be30-50a1e8ca2777 (designate) was prepared for execution.
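As a reading aid for the designate output that follows, each `ok:`/`skipping:` item echoed by the role is one kolla-ansible service definition dict (container name, image, volumes, healthcheck, optional haproxy section). A minimal Python sketch of that shape, using values taken from this log; the `healthcheck_cmd` helper is hypothetical and not part of osism or kolla-ansible:

```python
# One service definition as echoed per with_dict item in the designate tasks.
# Values copied from the job log below; trimmed to the fields discussed here.
service = {
    "key": "designate-api",
    "value": {
        "container_name": "designate_api",
        "group": "designate-api",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328",
        "healthcheck": {
            "interval": "30",
            "retries": "3",
            "start_period": "5",
            # Docker-style healthcheck: ["CMD-SHELL", <shell command>]
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9001"],
            "timeout": "30",
        },
    },
}


def healthcheck_cmd(item: dict) -> str:
    """Hypothetical helper: return the shell command the container healthcheck runs."""
    return item["value"]["healthcheck"]["test"][1]


print(healthcheck_cmd(service))  # the healthcheck_curl probe for designate-api
```

API containers probe their HTTP endpoint with `healthcheck_curl`, while worker-style containers in the log use `healthcheck_port` or `healthcheck_listen` instead.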
2026-04-16 09:47:24.428381 | orchestrator | 2026-04-16 09:47:24 | INFO  | It takes a moment until task fceb48d5-fc08-4fe1-be30-50a1e8ca2777 (designate) has been started and output is visible here.
2026-04-16 09:47:37.657923 | orchestrator |
2026-04-16 09:47:37.658128 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:47:37.658144 | orchestrator |
2026-04-16 09:47:37.658152 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:47:37.658161 | orchestrator | Thursday 16 April 2026 09:47:29 +0000 (0:00:01.496) 0:00:01.496 ********
2026-04-16 09:47:37.658188 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:47:37.658197 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:47:37.658205 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:47:37.658212 | orchestrator |
2026-04-16 09:47:37.658219 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:47:37.658226 | orchestrator | Thursday 16 April 2026 09:47:30 +0000 (0:00:01.582) 0:00:03.079 ********
2026-04-16 09:47:37.658234 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-04-16 09:47:37.658242 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-04-16 09:47:37.658249 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-04-16 09:47:37.658256 | orchestrator |
2026-04-16 09:47:37.658263 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-04-16 09:47:37.658270 | orchestrator |
2026-04-16 09:47:37.658278 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-04-16 09:47:37.658285 | orchestrator | Thursday 16 April 2026 09:47:32 +0000 (0:00:01.680) 0:00:04.759 ********
2026-04-16 09:47:37.658292 | orchestrator | included: /ansible/roles/designate/tasks/upgrade.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-04-16 09:47:37.658299 | orchestrator | 2026-04-16 09:47:37.658307 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-04-16 09:47:37.658314 | orchestrator | Thursday 16 April 2026 09:47:35 +0000 (0:00:02.912) 0:00:07.671 ******** 2026-04-16 09:47:37.658336 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:37.658349 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:37.658373 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:37.658391 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658400 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658412 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658420 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658428 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658435 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:37.658453 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.066870 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067051 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067101 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067113 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067160 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067205 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067221 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:45.067232 | orchestrator | 2026-04-16 09:47:45.067244 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-04-16 09:47:45.067256 | orchestrator | Thursday 16 April 2026 09:47:39 +0000 (0:00:04.223) 0:00:11.894 ******** 2026-04-16 09:47:45.067267 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:47:45.067278 | orchestrator | 2026-04-16 09:47:45.067288 | orchestrator | TASK [designate : Set designate policy 
file] *********************************** 2026-04-16 09:47:45.067297 | orchestrator | Thursday 16 April 2026 09:47:40 +0000 (0:00:01.102) 0:00:12.997 ******** 2026-04-16 09:47:45.067308 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:47:45.067324 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:47:45.067358 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:47:45.067376 | orchestrator | 2026-04-16 09:47:45.067392 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-16 09:47:45.067408 | orchestrator | Thursday 16 April 2026 09:47:41 +0000 (0:00:01.292) 0:00:14.290 ******** 2026-04-16 09:47:45.067425 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:47:45.067441 | orchestrator | 2026-04-16 09:47:45.067458 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-04-16 09:47:45.067476 | orchestrator | Thursday 16 April 2026 09:47:43 +0000 (0:00:01.717) 0:00:16.008 ******** 2026-04-16 09:47:45.067497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:45.067533 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:45.067565 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:49.091270 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091412 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091456 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091467 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091483 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091509 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091523 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091532 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091545 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091554 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091562 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:49.091579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:51.325104 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:51.325205 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:51.325240 | orchestrator | 2026-04-16 09:47:51.325253 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-04-16 09:47:51.325265 | orchestrator | Thursday 16 April 2026 09:47:50 +0000 (0:00:06.753) 0:00:22.762 ******** 2026-04-16 09:47:51.325276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:51.325290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:51.325300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:51.325332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:51.325351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:51.325361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:51.325370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:47:51.325379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:51.325397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598111 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598236 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598246 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:47:53.598256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2026-04-16 09:47:53.598290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598310 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:47:53.598319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:47:53.598327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:47:53.598336 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:47:53.598344 | orchestrator |
2026-04-16 09:47:53.598353 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-04-16 09:47:53.598363 | orchestrator | Thursday 16 April 2026 09:47:52 +0000 (0:00:02.355) 0:00:25.118 ********
2026-04-16 09:47:53.598372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:47:53.598384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:53.598403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:54.037351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:54.037446 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:47:54.037485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:47:54.037580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037601 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:47:54.037611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:47:54.037636 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:47:54.037650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-04-16 09:47:58.732103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-04-16 09:47:58.732198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:47:58.732212 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:47:58.732220 | orchestrator |
2026-04-16 09:47:58.732226 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-04-16 09:47:58.732233 | orchestrator | Thursday 16 April 2026 09:47:55 +0000 (0:00:02.578) 0:00:27.696 ********
2026-04-16 09:47:58.732240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 09:47:58.732265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port':
'9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:58.732284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:47:58.732326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:58.732334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:58.732339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:47:58.732350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:58.732358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:47:58.732369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:05.730851 | orchestrator | 2026-04-16 09:48:05.730864 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-04-16 09:48:05.730877 | orchestrator | Thursday 16 April 2026 09:48:02 +0000 (0:00:07.352) 0:00:35.048 ******** 2026-04-16 09:48:05.730890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:05.730911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:05.730932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:16.454282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:16.454880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:27.828536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:27.828715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:27.828748 | orchestrator | 2026-04-16 09:48:27.828771 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-04-16 09:48:27.828793 | orchestrator | Thursday 16 April 2026 09:48:18 +0000 (0:00:15.463) 0:00:50.511 ******** 2026-04-16 09:48:27.828812 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-16 09:48:27.828833 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-16 09:48:27.828851 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-04-16 09:48:27.828870 | orchestrator | 2026-04-16 09:48:27.828882 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-04-16 09:48:27.828892 | orchestrator | 
Thursday 16 April 2026 09:48:22 +0000 (0:00:04.540) 0:00:55.052 ******** 2026-04-16 09:48:27.828903 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-16 09:48:27.828914 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-16 09:48:27.828924 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-04-16 09:48:27.828964 | orchestrator | 2026-04-16 09:48:27.828976 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-04-16 09:48:27.828987 | orchestrator | Thursday 16 April 2026 09:48:26 +0000 (0:00:03.512) 0:00:58.565 ******** 2026-04-16 09:48:27.829017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:27.829055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:27.829082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:27.829098 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:27.829118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:27.829132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:27.829145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:27.829173 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:30.734696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 
09:48:30.734797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:30.734812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:30.734841 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:30.734852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:30.734882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:30.734911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:30.734922 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:30.735001 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:30.735019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:30.735030 | orchestrator | 2026-04-16 09:48:30.735042 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-04-16 09:48:30.735054 | orchestrator | Thursday 16 April 2026 09:48:29 +0000 (0:00:03.802) 0:01:02.367 ******** 2026-04-16 09:48:30.735065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:30.735094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:31.638306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:31.638431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:31.638482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638601 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:31.638622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638725 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:31.638746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:31.638780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:35.501883 | orchestrator | ok: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:35.502171 | orchestrator | ok: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:35.502197 | orchestrator | ok: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:35.502231 | orchestrator | 2026-04-16 09:48:35.502247 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-04-16 09:48:35.502269 | orchestrator | Thursday 16 April 2026 09:48:33 +0000 (0:00:03.675) 0:01:06.043 ******** 2026-04-16 09:48:35.502287 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:48:35.502307 | orchestrator | skipping: 
[testbed-node-1] 2026-04-16 09:48:35.502326 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:48:35.502345 | orchestrator | 2026-04-16 09:48:35.502365 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-04-16 09:48:35.502385 | orchestrator | Thursday 16 April 2026 09:48:34 +0000 (0:00:01.286) 0:01:07.329 ******** 2026-04-16 09:48:35.502407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:35.502430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:48:35.502466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:35.502488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:35.502513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:35.502526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:48:35.502539 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:48:35.502553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:35.502574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:48:38.675114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675280 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675343 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:48:38.675356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:38.675385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:48:38.675396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:48:38.675446 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:48:38.675455 | orchestrator | 2026-04-16 09:48:38.675465 | orchestrator | TASK 
[service-check-containers : designate | Check containers] ***************** 2026-04-16 09:48:38.675476 | orchestrator | Thursday 16 April 2026 09:48:37 +0000 (0:00:02.148) 0:01:09.478 ******** 2026-04-16 09:48:38.675485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:38.675502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:42.074841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:48:42.075009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 
2026-04-16 09:48:42.075028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:42.075189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:46.021455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:46.021582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:46.021607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:48:46.021621 | orchestrator | 2026-04-16 09:48:46.021634 | orchestrator | TASK [service-check-containers : designate | Notify handlers to restart containers] *** 2026-04-16 09:48:46.021644 | orchestrator | Thursday 16 April 2026 09:48:44 +0000 (0:00:07.019) 0:01:16.497 ******** 2026-04-16 09:48:46.021656 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:48:46.021667 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:48:46.021677 | orchestrator | } 2026-04-16 09:48:46.021687 | orchestrator | changed: 
[testbed-node-1] => { 2026-04-16 09:48:46.021696 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:48:46.021706 | orchestrator | } 2026-04-16 09:48:46.021715 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:48:46.021750 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:48:46.021760 | orchestrator | } 2026-04-16 09:48:46.021770 | orchestrator | 2026-04-16 09:48:46.021780 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:48:46.021790 | orchestrator | Thursday 16 April 2026 09:48:45 +0000 (0:00:01.391) 0:01:17.889 ******** 2026-04-16 09:48:46.021801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:46.021847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:48:46.021861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:48:46.021872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:48:46.021882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:48:46.021901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:48:46.021911 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:48:46.021922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:48:46.021979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:49:33.303976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304125 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:49:33.304153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:49:33.304184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-04-16 09:49:33.304197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:49:33.304249 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:49:33.304260 | orchestrator | 2026-04-16 09:49:33.304271 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-04-16 09:49:33.304284 | orchestrator | Thursday 16 April 2026 09:48:47 +0000 (0:00:02.086) 0:01:19.975 ******** 2026-04-16 09:49:33.304295 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:49:33.304305 | orchestrator | 2026-04-16 09:49:33.304316 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-16 09:49:33.304325 | orchestrator | Thursday 16 April 2026 09:49:03 +0000 (0:00:15.556) 0:01:35.532 ******** 2026-04-16 09:49:33.304336 | orchestrator | 2026-04-16 09:49:33.304346 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-16 09:49:33.304357 | orchestrator | Thursday 16 April 2026 09:49:03 +0000 (0:00:00.597) 0:01:36.130 ******** 2026-04-16 09:49:33.304367 | orchestrator | 2026-04-16 09:49:33.304377 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-04-16 09:49:33.304387 | orchestrator | Thursday 16 April 2026 09:49:04 +0000 (0:00:00.442) 0:01:36.572 ******** 2026-04-16 09:49:33.304398 | orchestrator | 2026-04-16 09:49:33.304409 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-04-16 09:49:33.304425 | orchestrator | Thursday 16 April 2026 09:49:04 +0000 (0:00:00.783) 0:01:37.356 ******** 2026-04-16 09:49:33.304436 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:49:33.304446 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:49:33.304457 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:49:33.304468 | orchestrator | 2026-04-16 09:49:33.304478 | 
orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-04-16 09:49:33.304488 | orchestrator | Thursday 16 April 2026 09:49:19 +0000 (0:00:14.998) 0:01:52.354 ******** 2026-04-16 09:49:33.304499 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:49:33.304510 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:49:33.304521 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:49:33.304531 | orchestrator | 2026-04-16 09:49:33.304542 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-04-16 09:49:33.304559 | orchestrator | Thursday 16 April 2026 09:49:33 +0000 (0:00:13.342) 0:02:05.697 ******** 2026-04-16 09:51:27.427792 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:51:27.427965 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:51:27.427980 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:51:27.427991 | orchestrator | 2026-04-16 09:51:27.428004 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-04-16 09:51:27.428017 | orchestrator | Thursday 16 April 2026 09:49:46 +0000 (0:00:13.289) 0:02:18.987 ******** 2026-04-16 09:51:27.428051 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:51:27.428062 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:51:27.428073 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:51:27.428085 | orchestrator | 2026-04-16 09:51:27.428096 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-04-16 09:51:27.428107 | orchestrator | Thursday 16 April 2026 09:50:50 +0000 (0:01:03.656) 0:03:22.643 ******** 2026-04-16 09:51:27.428118 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:51:27.428129 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:51:27.428140 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:51:27.428151 | orchestrator | 2026-04-16 09:51:27.428162 | orchestrator | 
RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-04-16 09:51:27.428173 | orchestrator | Thursday 16 April 2026 09:51:03 +0000 (0:00:13.451) 0:03:36.095 ******** 2026-04-16 09:51:27.428184 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:51:27.428194 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:51:27.428205 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:51:27.428216 | orchestrator | 2026-04-16 09:51:27.428227 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-04-16 09:51:27.428238 | orchestrator | Thursday 16 April 2026 09:51:17 +0000 (0:00:14.135) 0:03:50.230 ******** 2026-04-16 09:51:27.428249 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:51:27.428260 | orchestrator | 2026-04-16 09:51:27.428271 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:51:27.428283 | orchestrator | testbed-node-0 : ok=22  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-04-16 09:51:27.428295 | orchestrator | testbed-node-1 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 09:51:27.428306 | orchestrator | testbed-node-2 : ok=20  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 09:51:27.428316 | orchestrator | 2026-04-16 09:51:27.428327 | orchestrator | 2026-04-16 09:51:27.428339 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 09:51:27.428351 | orchestrator | Thursday 16 April 2026 09:51:27 +0000 (0:00:09.328) 0:03:59.558 ******** 2026-04-16 09:51:27.428363 | orchestrator | =============================================================================== 2026-04-16 09:51:27.428374 | orchestrator | designate : Restart designate-producer container ----------------------- 63.66s 2026-04-16 09:51:27.428384 | orchestrator | designate : Running Designate bootstrap 
container ---------------------- 15.56s 2026-04-16 09:51:27.428394 | orchestrator | designate : Copying over designate.conf -------------------------------- 15.46s 2026-04-16 09:51:27.428404 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 15.00s 2026-04-16 09:51:27.428413 | orchestrator | designate : Restart designate-worker container ------------------------- 14.14s 2026-04-16 09:51:27.428423 | orchestrator | designate : Restart designate-mdns container --------------------------- 13.45s 2026-04-16 09:51:27.428433 | orchestrator | designate : Restart designate-api container ---------------------------- 13.34s 2026-04-16 09:51:27.428442 | orchestrator | designate : Restart designate-central container ------------------------ 13.29s 2026-04-16 09:51:27.428452 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 9.33s 2026-04-16 09:51:27.428461 | orchestrator | designate : Copying over config.json files for services ----------------- 7.35s 2026-04-16 09:51:27.428472 | orchestrator | service-check-containers : designate | Check containers ----------------- 7.02s 2026-04-16 09:51:27.428482 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.75s 2026-04-16 09:51:27.428491 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.54s 2026-04-16 09:51:27.428502 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.22s 2026-04-16 09:51:27.428521 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.80s 2026-04-16 09:51:27.428531 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.68s 2026-04-16 09:51:27.428542 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.51s 2026-04-16 09:51:27.428553 | orchestrator | designate : include_tasks 
----------------------------------------------- 2.91s
2026-04-16 09:51:27.428562 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS key --- 2.58s
2026-04-16 09:51:27.428587 | orchestrator | service-cert-copy : designate | Copying over backend internal TLS certificate --- 2.36s
2026-04-16 09:51:27.588471 | orchestrator | + osism apply -a upgrade ceilometer
2026-04-16 09:51:28.805003 | orchestrator | 2026-04-16 09:51:28 | INFO  | Prepare task for execution of ceilometer.
2026-04-16 09:51:28.866436 | orchestrator | 2026-04-16 09:51:28 | INFO  | Task 8e85bc16-87f1-425b-8ebe-f7f2a925301f (ceilometer) was prepared for execution.
2026-04-16 09:51:28.866550 | orchestrator | 2026-04-16 09:51:28 | INFO  | It takes a moment until task 8e85bc16-87f1-425b-8ebe-f7f2a925301f (ceilometer) has been started and output is visible here.
2026-04-16 09:51:40.557578 | orchestrator |
2026-04-16 09:51:40.557750 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:51:40.557777 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:51:40.557796 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:51:40.557858 | orchestrator |
2026-04-16 09:51:40.557874 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:51:40.557889 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:51:40.557903 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:51:40.557933 | orchestrator | Thursday 16 April 2026 09:51:33 +0000 (0:00:01.467) 0:00:01.467 ********
2026-04-16 09:51:40.557948 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:51:40.557964 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:51:40.557979 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:51:40.557995 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:51:40.558009 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:51:40.558091 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:51:40.558108 | orchestrator |
2026-04-16 09:51:40.558124 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:51:40.558140 | orchestrator | Thursday 16 April 2026 09:51:34 +0000 (0:00:01.164) 0:00:02.632 ********
2026-04-16 09:51:40.558155 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558171 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558186 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558201 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558217 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558231 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True)
2026-04-16 09:51:40.558246 | orchestrator |
2026-04-16 09:51:40.558261 | orchestrator | PLAY [Apply role ceilometer] ***************************************************
2026-04-16 09:51:40.558277 | orchestrator |
2026-04-16 09:51:40.558293 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-04-16 09:51:40.558308 | orchestrator | Thursday 16 April 2026 09:51:35 +0000 (0:00:01.091) 0:00:03.723 ********
2026-04-16 09:51:40.558324 | orchestrator | included: /ansible/roles/ceilometer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 09:51:40.558342 | orchestrator |
2026-04-16 09:51:40.558356 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-04-16 09:51:40.558405 | orchestrator | Thursday 16 April 2026 09:51:37 +0000 (0:00:01.512) 0:00:05.235 ********
2026-04-16 09:51:40.558429 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:40.558451 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:40.558487 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:40.558535 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558572 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558619 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558641 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:40.558657 | orchestrator |
2026-04-16 09:51:40.558672 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] *****
2026-04-16 09:51:40.558687 | orchestrator | Thursday 16 April 2026 09:51:39 +0000 (0:00:02.113) 0:00:07.349 ********
2026-04-16 09:51:40.558711 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 09:51:44.084520 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 09:51:44.084600 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 09:51:44.084605 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 09:51:44.084610 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 09:51:44.084614 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 09:51:44.084618 | orchestrator |
2026-04-16 09:51:44.084624 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] ***
2026-04-16 09:51:44.084629 | orchestrator | Thursday 16 April 2026 09:51:41 +0000 (0:00:02.227) 0:00:09.577 ********
2026-04-16 09:51:44.084633 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:51:44.084638 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:51:44.084642 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:51:44.084646 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:51:44.084650 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:51:44.084654 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:51:44.084658 | orchestrator |
2026-04-16 09:51:44.084662 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] ***
2026-04-16 09:51:44.084666 | orchestrator | Thursday 16 April 2026 09:51:42 +0000 (0:00:00.477) 0:00:10.054 ********
2026-04-16 09:51:44.084670 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:44.084674 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:44.084677 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:44.084700 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:44.084704 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:44.084708 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:44.084712 | orchestrator |
2026-04-16 09:51:44.084716 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] ***
2026-04-16 09:51:44.084720 | orchestrator | Thursday 16 April 2026 09:51:42 +0000 (0:00:00.557) 0:00:10.612 ********
2026-04-16 09:51:44.084724 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:51:44.084728 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:51:44.084731 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:51:44.084735 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:51:44.084739 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:51:44.084743 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:51:44.084746 | orchestrator |
2026-04-16 09:51:44.084750 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] *********
2026-04-16 09:51:44.084754 | orchestrator | Thursday 16 April 2026 09:51:43 +0000 (0:00:00.533) 0:00:11.145 ********
2026-04-16 09:51:44.084760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:44.084768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:44.084773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084804 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:44.084808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:44.084856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084860 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:44.084864 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:44.084868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084872 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:44.084876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084880 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:44.084886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:44.084890 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:44.084894 | orchestrator |
2026-04-16 09:51:44.084898 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] *************
2026-04-16 09:51:44.084902 | orchestrator | Thursday 16 April 2026 09:51:43 +0000 (0:00:00.671) 0:00:11.817 ********
2026-04-16 09:51:44.084910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:49.677027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677141 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:49.677160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:49.677175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:49.677199 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:49.677227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677273 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:49.677304 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:49.677317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677328 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:49.677345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:49.677364 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:49.677383 | orchestrator |
2026-04-16 09:51:49.677411 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-04-16 09:51:49.677436 | orchestrator | Thursday 16 April 2026 09:51:44 +0000 (0:00:00.864) 0:00:12.682 ********
2026-04-16 09:51:49.677456 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 09:51:49.677473 | orchestrator |
2026-04-16 09:51:49.677491 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-04-16 09:51:49.677509 | orchestrator | Thursday 16 April 2026 09:51:45 +0000 (0:00:00.701) 0:00:13.383 ********
2026-04-16 09:51:49.677527 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:51:49.677546 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:51:49.677566 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:51:49.677586 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:51:49.677605 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:51:49.677624 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:51:49.677639 | orchestrator |
2026-04-16 09:51:49.677651 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-04-16 09:51:49.677664 | orchestrator | Thursday 16 April 2026 09:51:45 +0000 (0:00:00.998) 0:00:13.841 ********
2026-04-16 09:51:49.677677 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:51:49.677690 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:51:49.677702 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:51:49.677715 | orchestrator | ok: [testbed-node-3]
2026-04-16 09:51:49.677727 | orchestrator | ok: [testbed-node-4]
2026-04-16 09:51:49.677739 | orchestrator | ok: [testbed-node-5]
2026-04-16 09:51:49.677764 | orchestrator |
2026-04-16 09:51:49.677777 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-04-16 09:51:49.677790 | orchestrator | Thursday 16 April 2026 09:51:46 +0000 (0:00:00.522) 0:00:14.839 ********
2026-04-16 09:51:49.677803 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:49.677841 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:49.677853 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:49.677865 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:49.677877 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:49.677890 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:49.677902 | orchestrator |
2026-04-16 09:51:49.677913 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-04-16 09:51:49.677924 | orchestrator | Thursday 16 April 2026 09:51:47 +0000 (0:00:00.660) 0:00:15.362 ********
2026-04-16 09:51:49.677935 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:49.677945 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:49.677964 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:49.677976 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:49.677986 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:49.677997 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:49.678008 | orchestrator |
2026-04-16 09:51:49.678086 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-04-16 09:51:49.678101 | orchestrator | Thursday 16 April 2026 09:51:48 +0000 (0:00:00.660) 0:00:16.022 ********
2026-04-16 09:51:49.678112 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 09:51:49.678123 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 09:51:49.678134 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 09:51:49.678144 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 09:51:49.678155 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 09:51:49.678166 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 09:51:49.678177 | orchestrator |
2026-04-16 09:51:49.678188 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-04-16 09:51:49.678199 | orchestrator | Thursday 16 April 2026 09:51:49 +0000 (0:00:01.350) 0:00:17.373 ********
2026-04-16 09:51:49.678227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:52.384984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:52.385116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385124 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:52.385132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:52.385150 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:52.385157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385171 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:52.385191 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:52.385198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385204 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:52.385211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385222 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:52.385229 | orchestrator |
2026-04-16 09:51:52.385236 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-04-16 09:51:52.385244 | orchestrator | Thursday 16 April 2026 09:51:50 +0000 (0:00:00.746) 0:00:18.119 ********
2026-04-16 09:51:52.385250 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:52.385257 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:51:52.385263 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:51:52.385269 | orchestrator | skipping: [testbed-node-3]
2026-04-16 09:51:52.385275 | orchestrator | skipping: [testbed-node-4]
2026-04-16 09:51:52.385281 | orchestrator | skipping: [testbed-node-5]
2026-04-16 09:51:52.385287 | orchestrator |
2026-04-16 09:51:52.385294 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-04-16 09:51:52.385300 | orchestrator | Thursday 16 April 2026 09:51:50 +0000 (0:00:00.614) 0:00:18.733 ********
2026-04-16 09:51:52.385306 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 09:51:52.385312 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 09:51:52.385318 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 09:51:52.385325 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 09:51:52.385331 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 09:51:52.385337 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 09:51:52.385343 | orchestrator |
2026-04-16 09:51:52.385349 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-04-16 09:51:52.385355 | orchestrator | Thursday 16 April 2026 09:51:52 +0000 (0:00:01.246) 0:00:19.979 ********
2026-04-16 09:51:52.385365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-04-16 09:51:52.385373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-04-16 09:51:52.385379 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:51:52.385392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:51:57.543961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:57.544089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:51:57.544105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:57.544129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:57.544139 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:51:57.544149 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:51:57.544157 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:51:57.544166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:57.544174 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:51:57.544237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:57.544251 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:51:57.544261 | orchestrator | 2026-04-16 09:51:57.544271 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-04-16 09:51:57.544282 | orchestrator | Thursday 16 April 2026 09:51:53 +0000 (0:00:01.038) 0:00:21.018 ******** 2026-04-16 09:51:57.544291 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:51:57.544300 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:51:57.544308 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:51:57.544316 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:51:57.544324 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:51:57.544332 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:51:57.544341 | orchestrator | 2026-04-16 09:51:57.544349 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-04-16 09:51:57.544357 | orchestrator | Thursday 16 April 2026 09:51:53 +0000 (0:00:00.581) 0:00:21.599 ******** 2026-04-16 09:51:57.544365 | orchestrator | skipping: [testbed-node-0] 2026-04-16 
09:51:57.544373 | orchestrator | 2026-04-16 09:51:57.544381 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-04-16 09:51:57.544389 | orchestrator | Thursday 16 April 2026 09:51:53 +0000 (0:00:00.129) 0:00:21.729 ******** 2026-04-16 09:51:57.544397 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:51:57.544406 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:51:57.544414 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:51:57.544422 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:51:57.544430 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:51:57.544438 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:51:57.544446 | orchestrator | 2026-04-16 09:51:57.544454 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-04-16 09:51:57.544463 | orchestrator | Thursday 16 April 2026 09:51:54 +0000 (0:00:00.766) 0:00:22.496 ******** 2026-04-16 09:51:57.544473 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-04-16 09:51:57.544483 | orchestrator | 2026-04-16 09:51:57.544493 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-04-16 09:51:57.544502 | orchestrator | Thursday 16 April 2026 09:51:56 +0000 (0:00:01.644) 0:00:24.141 ******** 2026-04-16 09:51:57.544513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:51:57.544531 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:51:57.544557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:51:57.544581 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253733 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253903 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253944 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253971 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:51:59.253979 | orchestrator | 2026-04-16 09:51:59.253988 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-04-16 09:51:59.253997 | orchestrator | Thursday 16 April 
2026 09:51:58 +0000 (0:00:02.345) 0:00:26.486 ******** 2026-04-16 09:51:59.254066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:51:59.254078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:59.254085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:51:59.254093 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:51:59.254102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:59.254120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:51:59.254128 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:51:59.254135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:59.254142 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:51:59.254149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:51:59.254161 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:02.510764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.510886 | 
orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:02.510898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.510906 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:02.510912 | orchestrator | 2026-04-16 09:52:02.510920 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-04-16 09:52:02.510948 | orchestrator | Thursday 16 April 2026 09:51:59 +0000 (0:00:00.991) 0:00:27.478 ******** 2026-04-16 09:52:02.510968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:02.510976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 
'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.510983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:02.511027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.511039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:02.511049 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:02.511058 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:52:02.511068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.511085 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:02.511099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.511108 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:52:02.511118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.511128 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:02.511137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:02.511147 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:02.511156 | orchestrator | 2026-04-16 
09:52:02.511166 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-04-16 09:52:02.511176 | orchestrator | Thursday 16 April 2026 09:52:01 +0000 (0:00:01.678) 0:00:29.156 ******** 2026-04-16 09:52:02.511196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:06.398767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:06.398942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:06.398972 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.398984 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.398994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 
'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.399004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.399032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.399050 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.399060 | orchestrator | 2026-04-16 09:52:06.399070 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-04-16 09:52:06.399081 | orchestrator | Thursday 16 April 2026 09:52:03 +0000 (0:00:02.291) 0:00:31.448 ******** 2026-04-16 09:52:06.399095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:06.399105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:06.399115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:06.399130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:16.043451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:16.043544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:16.043568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:16.043576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 
'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:16.043584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:16.043591 | orchestrator | 2026-04-16 09:52:16.043599 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-04-16 09:52:16.043608 | orchestrator | Thursday 16 April 2026 09:52:08 +0000 (0:00:04.632) 0:00:36.081 ******** 2026-04-16 09:52:16.043616 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:52:16.043623 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 09:52:16.043630 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 09:52:16.043637 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:52:16.043660 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:52:16.043667 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:52:16.043674 | 
orchestrator | 2026-04-16 09:52:16.043681 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-04-16 09:52:16.043688 | orchestrator | Thursday 16 April 2026 09:52:09 +0000 (0:00:01.377) 0:00:37.458 ******** 2026-04-16 09:52:16.043695 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:16.043701 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:52:16.043708 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:52:16.043715 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:16.043721 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:16.043743 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:16.043750 | orchestrator | 2026-04-16 09:52:16.043756 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-04-16 09:52:16.043764 | orchestrator | Thursday 16 April 2026 09:52:10 +0000 (0:00:00.537) 0:00:37.996 ******** 2026-04-16 09:52:16.043771 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:16.043777 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:16.043784 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:16.043791 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:52:16.043843 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:52:16.043849 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:52:16.043855 | orchestrator | 2026-04-16 09:52:16.043862 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-04-16 09:52:16.043868 | orchestrator | Thursday 16 April 2026 09:52:11 +0000 (0:00:01.389) 0:00:39.385 ******** 2026-04-16 09:52:16.043874 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:16.043880 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:16.043886 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:16.043892 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:52:16.043898 | orchestrator | ok: 
[testbed-node-1] 2026-04-16 09:52:16.043905 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:52:16.043911 | orchestrator | 2026-04-16 09:52:16.043917 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-04-16 09:52:16.043923 | orchestrator | Thursday 16 April 2026 09:52:12 +0000 (0:00:01.509) 0:00:40.895 ******** 2026-04-16 09:52:16.043930 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 09:52:16.043936 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-04-16 09:52:16.043942 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-04-16 09:52:16.043948 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-04-16 09:52:16.043954 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-04-16 09:52:16.043960 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-04-16 09:52:16.043966 | orchestrator | 2026-04-16 09:52:16.043972 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-04-16 09:52:16.043979 | orchestrator | Thursday 16 April 2026 09:52:14 +0000 (0:00:01.510) 0:00:42.406 ******** 2026-04-16 09:52:16.043990 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:16.043998 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 
'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:16.044012 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:16.044020 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 
09:52:16.044033 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:17.516720 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:17.516847 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:17.516861 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:17.516900 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:17.516908 | orchestrator | 2026-04-16 09:52:17.516917 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-04-16 09:52:17.516925 | orchestrator | Thursday 16 April 2026 09:52:16 +0000 (0:00:02.394) 0:00:44.800 ******** 2026-04-16 09:52:17.516933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:17.516941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:17.516948 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:17.516971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:17.516982 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:17.516998 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:52:17.517005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:17.517013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:17.517020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:17.517027 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:52:17.517034 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:17.517046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.951502 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:20.951627 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.951653 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:20.951671 | orchestrator | 2026-04-16 09:52:20.951710 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-04-16 09:52:20.951728 | orchestrator | Thursday 16 April 2026 09:52:17 +0000 (0:00:00.821) 0:00:45.622 ******** 2026-04-16 09:52:20.951767 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:20.951785 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:52:20.951926 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:52:20.951946 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:20.951964 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:20.951981 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:20.951999 | orchestrator | 2026-04-16 09:52:20.952017 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-04-16 09:52:20.952035 | orchestrator | Thursday 16 April 2026 09:52:18 +0000 (0:00:00.759) 0:00:46.381 ******** 2026-04-16 09:52:20.952056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:20.952078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952097 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:20.952116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:20.952136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952152 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:52:20.952196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:20.952242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952261 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:52:20.952279 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952298 | orchestrator | skipping: [testbed-node-3] 2026-04-16 09:52:20.952316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952336 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:52:20.952355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:20.952371 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:52:20.952388 | orchestrator | 2026-04-16 09:52:20.952405 | orchestrator | TASK [service-check-containers : ceilometer | Check containers] **************** 2026-04-16 09:52:20.952423 | orchestrator | Thursday 16 April 2026 09:52:19 +0000 (0:00:01.210) 0:00:47.592 ******** 2026-04-16 09:52:20.952453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:23.284099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:23.284207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-04-16 09:52:23.284248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 
'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-04-16 09:52:23.284352 | orchestrator | 2026-04-16 09:52:23.284365 | orchestrator | TASK [service-check-containers : ceilometer | Notify handlers to restart containers] *** 2026-04-16 09:52:23.284377 | orchestrator | Thursday 16 April 2026 09:52:22 +0000 (0:00:02.454) 0:00:50.046 ******** 2026-04-16 09:52:23.284389 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:52:23.284400 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284413 | orchestrator | } 2026-04-16 09:52:23.284424 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:52:23.284435 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284445 | orchestrator | } 2026-04-16 09:52:23.284456 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:52:23.284466 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284477 | orchestrator | } 2026-04-16 09:52:23.284488 | orchestrator | changed: [testbed-node-3] => { 2026-04-16 
09:52:23.284499 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284509 | orchestrator | } 2026-04-16 09:52:23.284520 | orchestrator | changed: [testbed-node-4] => { 2026-04-16 09:52:23.284531 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284541 | orchestrator | } 2026-04-16 09:52:23.284552 | orchestrator | changed: [testbed-node-5] => { 2026-04-16 09:52:23.284563 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:52:23.284574 | orchestrator | } 2026-04-16 09:52:23.284586 | orchestrator | 2026-04-16 09:52:23.284599 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:52:23.284612 | orchestrator | Thursday 16 April 2026 09:52:22 +0000 (0:00:00.619) 0:00:50.665 ******** 2026-04-16 09:52:23.284625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:52:23.284639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:52:23.284660 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:52:23.284681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:53:10.816718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:53:10.817011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-04-16 09:53:10.817048 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:53:10.817072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:53:10.817093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:53:10.817145 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:53:10.817166 | orchestrator | skipping: [testbed-node-3] 
2026-04-16 09:53:10.817187 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:53:10.817209 | orchestrator | skipping: [testbed-node-4] 2026-04-16 09:53:10.817259 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-04-16 09:53:10.817281 | orchestrator | skipping: [testbed-node-5] 2026-04-16 09:53:10.817303 | orchestrator | 2026-04-16 09:53:10.817326 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-04-16 09:53:10.817348 | orchestrator | Thursday 16 April 2026 09:52:24 +0000 (0:00:01.887) 0:00:52.553 ******** 2026-04-16 09:53:10.817369 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:53:10.817390 | 
orchestrator | 2026-04-16 09:53:10.817411 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817444 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:08.551) 0:01:01.104 ******** 2026-04-16 09:53:10.817457 | orchestrator | 2026-04-16 09:53:10.817469 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817481 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.082) 0:01:01.187 ******** 2026-04-16 09:53:10.817494 | orchestrator | 2026-04-16 09:53:10.817506 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817518 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.070) 0:01:01.258 ******** 2026-04-16 09:53:10.817531 | orchestrator | 2026-04-16 09:53:10.817543 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817554 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.069) 0:01:01.328 ******** 2026-04-16 09:53:10.817565 | orchestrator | 2026-04-16 09:53:10.817576 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817587 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.071) 0:01:01.399 ******** 2026-04-16 09:53:10.817598 | orchestrator | 2026-04-16 09:53:10.817609 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-04-16 09:53:10.817620 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.072) 0:01:01.472 ******** 2026-04-16 09:53:10.817630 | orchestrator | 2026-04-16 09:53:10.817641 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-04-16 09:53:10.817652 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-16 
09:53:10.817664 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-16 09:53:10.817686 | orchestrator | Thursday 16 April 2026 09:52:33 +0000 (0:00:00.071) 0:01:01.543 ******** 2026-04-16 09:53:10.817708 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:53:10.817719 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:53:10.817731 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:53:10.817742 | orchestrator | 2026-04-16 09:53:10.817752 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-04-16 09:53:10.817792 | orchestrator | Thursday 16 April 2026 09:52:45 +0000 (0:00:12.058) 0:01:13.602 ******** 2026-04-16 09:53:10.817803 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:53:10.817814 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:53:10.817825 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:53:10.817836 | orchestrator | 2026-04-16 09:53:10.817846 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-04-16 09:53:10.817857 | orchestrator | Thursday 16 April 2026 09:52:57 +0000 (0:00:11.901) 0:01:25.504 ******** 2026-04-16 09:53:10.817868 | orchestrator | changed: [testbed-node-3] 2026-04-16 09:53:10.817879 | orchestrator | changed: [testbed-node-4] 2026-04-16 09:53:10.817890 | orchestrator | changed: [testbed-node-5] 2026-04-16 09:53:10.817901 | orchestrator | 2026-04-16 09:53:10.817912 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:53:10.817924 | orchestrator | testbed-node-0 : ok=26  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2026-04-16 09:53:10.817936 | orchestrator | testbed-node-1 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 09:53:10.817947 | orchestrator | testbed-node-2 : ok=24  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-04-16 09:53:10.817958 
| orchestrator | testbed-node-3 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 09:53:10.817969 | orchestrator | testbed-node-4 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 09:53:10.817981 | orchestrator | testbed-node-5 : ok=21  changed=5  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-04-16 09:53:10.817992 | orchestrator | 2026-04-16 09:53:10.818003 | orchestrator | 2026-04-16 09:53:10.818071 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 09:53:10.818084 | orchestrator | Thursday 16 April 2026 09:53:10 +0000 (0:00:13.196) 0:01:38.701 ******** 2026-04-16 09:53:10.818095 | orchestrator | =============================================================================== 2026-04-16 09:53:10.818106 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 13.20s 2026-04-16 09:53:10.818117 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 12.06s 2026-04-16 09:53:10.818127 | orchestrator | ceilometer : Restart ceilometer-central container ---------------------- 11.90s 2026-04-16 09:53:10.818138 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 8.55s 2026-04-16 09:53:10.818149 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.63s 2026-04-16 09:53:10.818173 | orchestrator | service-check-containers : ceilometer | Check containers ---------------- 2.45s 2026-04-16 09:53:11.192691 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.39s 2026-04-16 09:53:11.192789 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.35s 2026-04-16 09:53:11.192798 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.29s 2026-04-16 09:53:11.192803 | orchestrator | ceilometer 
: Check if the folder for custom meter definitions exist ----- 2.23s 2026-04-16 09:53:11.192809 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 2.11s 2026-04-16 09:53:11.192829 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.89s 2026-04-16 09:53:11.192849 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.68s 2026-04-16 09:53:11.192855 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.64s 2026-04-16 09:53:11.192860 | orchestrator | ceilometer : include_tasks ---------------------------------------------- 1.51s 2026-04-16 09:53:11.192865 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.51s 2026-04-16 09:53:11.192870 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.51s 2026-04-16 09:53:11.192875 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.39s 2026-04-16 09:53:11.192882 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.38s 2026-04-16 09:53:11.192890 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.35s 2026-04-16 09:53:11.355308 | orchestrator | + osism apply -a upgrade aodh 2026-04-16 09:53:12.612556 | orchestrator | 2026-04-16 09:53:12 | INFO  | Prepare task for execution of aodh. 2026-04-16 09:53:12.674348 | orchestrator | 2026-04-16 09:53:12 | INFO  | Task 91fdf20e-c24e-4592-99d2-c21116a75c18 (aodh) was prepared for execution. 2026-04-16 09:53:12.674467 | orchestrator | 2026-04-16 09:53:12 | INFO  | It takes a moment until task 91fdf20e-c24e-4592-99d2-c21116a75c18 (aodh) has been started and output is visible here. 
2026-04-16 09:53:25.476542 | orchestrator | 2026-04-16 09:53:25.476641 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:53:25.476654 | orchestrator | 2026-04-16 09:53:25.476663 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:53:25.476671 | orchestrator | Thursday 16 April 2026 09:53:17 +0000 (0:00:01.527) 0:00:01.527 ******** 2026-04-16 09:53:25.476680 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:53:25.476689 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:53:25.476697 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:53:25.476705 | orchestrator | 2026-04-16 09:53:25.476713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:53:25.476721 | orchestrator | Thursday 16 April 2026 09:53:19 +0000 (0:00:01.706) 0:00:03.234 ******** 2026-04-16 09:53:25.476730 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-04-16 09:53:25.476738 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-04-16 09:53:25.476798 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-04-16 09:53:25.476809 | orchestrator | 2026-04-16 09:53:25.476817 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-04-16 09:53:25.476825 | orchestrator | 2026-04-16 09:53:25.476833 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-16 09:53:25.476841 | orchestrator | Thursday 16 April 2026 09:53:20 +0000 (0:00:01.857) 0:00:05.092 ******** 2026-04-16 09:53:25.476849 | orchestrator | included: /ansible/roles/aodh/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:53:25.476858 | orchestrator | 2026-04-16 09:53:25.476866 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-04-16 
09:53:25.476874 | orchestrator | Thursday 16 April 2026 09:53:23 +0000 (0:00:02.353) 0:00:07.445 ******** 2026-04-16 09:53:25.476886 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:25.476929 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:25.476954 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:25.476965 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:25.476974 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:25.476982 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:25.476997 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:25.477010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:25.477018 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:25.477034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:29.770720 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:29.770890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:29.770909 | orchestrator | 2026-04-16 09:53:29.770922 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-04-16 09:53:29.770933 | orchestrator | Thursday 16 April 2026 09:53:26 +0000 (0:00:03.482) 0:00:10.928 ******** 2026-04-16 09:53:29.770968 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:53:29.770979 | orchestrator | 2026-04-16 09:53:29.770989 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-04-16 09:53:29.770999 | orchestrator | Thursday 16 April 2026 09:53:27 +0000 (0:00:01.071) 0:00:12.000 ******** 2026-04-16 09:53:29.771008 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:53:29.771018 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:53:29.771027 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:53:29.771037 | orchestrator | 2026-04-16 09:53:29.771047 | orchestrator | TASK [aodh : Copying over existing policy 
file] ******************************** 2026-04-16 09:53:29.771056 | orchestrator | Thursday 16 April 2026 09:53:29 +0000 (0:00:01.331) 0:00:13.331 ******** 2026-04-16 09:53:29.771082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:29.771097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:29.771108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:29.771137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:29.771148 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:53:29.771159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:29.771179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:29.771195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:29.771205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:29.771215 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:53:29.771234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:35.369167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:35.369297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 
'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:35.369321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:35.369336 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:53:35.369354 | orchestrator | 2026-04-16 09:53:35.369364 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-04-16 09:53:35.369377 | orchestrator | Thursday 16 April 2026 09:53:30 +0000 (0:00:01.726) 0:00:15.058 ******** 2026-04-16 09:53:35.369391 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:53:35.369405 | orchestrator | 2026-04-16 09:53:35.369420 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-04-16 09:53:35.369433 | orchestrator | Thursday 16 April 2026 09:53:32 +0000 (0:00:01.640) 0:00:16.699 ******** 2026-04-16 
09:53:35.369463 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:35.369494 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 
2026-04-16 09:53:35.369514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:35.369529 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:35.369542 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:35.369563 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:35.369577 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:35.369599 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:38.496121 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:38.496213 | orchestrator | ok: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:38.496227 | orchestrator | ok: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:38.496253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:38.496262 | orchestrator | 2026-04-16 09:53:38.496272 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-04-16 09:53:38.496283 | orchestrator | Thursday 16 April 2026 09:53:37 +0000 (0:00:05.118) 0:00:21.817 ******** 2026-04-16 09:53:38.496295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:38.496325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:38.496357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:38.496364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:38.496370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:38.496379 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:53:38.496386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:38.496391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:38.496409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.503835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:40.503929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.503944 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:53:40.503969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.503980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.503989 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:53:40.503998 | orchestrator | 2026-04-16 09:53:40.504007 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-04-16 09:53:40.504018 | orchestrator | Thursday 16 April 2026 09:53:39 +0000 (0:00:02.216) 0:00:24.033 ******** 2026-04-16 09:53:40.504053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:40.504082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:40.504093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:40.504103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.504118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:40.504128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:53:40.504144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:40.504154 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:53:40.504170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:45.281724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:53:45.282168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:45.282200 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:53:45.282233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:53:45.282314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:53:45.282330 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:53:45.282344 | orchestrator | 2026-04-16 09:53:45.282358 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-04-16 09:53:45.282371 | orchestrator | Thursday 16 April 2026 09:53:41 +0000 (0:00:02.065) 0:00:26.099 ******** 2026-04-16 09:53:45.282385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 
'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:45.282429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:45.282450 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:45.282473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:45.282491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:45.282510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:45.282528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:45.282558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549391 | orchestrator | 2026-04-16 09:53:53.549411 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-04-16 09:53:53.549428 | orchestrator | Thursday 16 April 2026 09:53:47 +0000 (0:00:05.609) 0:00:31.708 ******** 2026-04-16 09:53:53.549446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:53.549490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:53.549519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:53:53.549547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:53:53.549597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:02.387952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 
09:54:02.388088 | orchestrator | 2026-04-16 09:54:02.388099 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-04-16 09:54:02.388108 | orchestrator | Thursday 16 April 2026 09:53:56 +0000 (0:00:09.175) 0:00:40.885 ******** 2026-04-16 09:54:02.388117 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:54:02.388137 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:54:02.388153 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:54:02.388161 | orchestrator | 2026-04-16 09:54:02.388169 | orchestrator | TASK [service-check-containers : aodh | Check containers] ********************** 2026-04-16 09:54:02.388177 | orchestrator | Thursday 16 April 2026 09:53:59 +0000 (0:00:02.714) 0:00:43.599 ******** 2026-04-16 09:54:02.388187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:54:02.388239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:54:02.388250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 09:54:02.388260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-04-16 09:54:02.388297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-04-16 09:54:06.377349 | orchestrator | 2026-04-16 09:54:06.377371 | orchestrator | TASK [service-check-containers : aodh | Notify handlers to restart containers] *** 2026-04-16 09:54:06.377393 | orchestrator | Thursday 16 April 2026 09:54:04 +0000 
(0:00:05.096) 0:00:48.697 ******** 2026-04-16 09:54:06.377444 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 09:54:06.377473 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:54:06.377495 | orchestrator | } 2026-04-16 09:54:06.377515 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:54:06.377534 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:54:06.377553 | orchestrator | } 2026-04-16 09:54:06.377571 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:54:06.377588 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:54:06.377606 | orchestrator | } 2026-04-16 09:54:06.377624 | orchestrator | 2026-04-16 09:54:06.377644 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:54:06.377665 | orchestrator | Thursday 16 April 2026 09:54:05 +0000 (0:00:01.476) 0:00:50.173 ******** 2026-04-16 09:54:06.377722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:54:06.377784 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:54:06.377807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:54:06.377828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:54:06.377849 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:54:06.377864 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 09:54:06.377893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:54:06.377922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:55:30.027257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:55:30.027373 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:55:30.027391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-16 09:55:30.027407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-04-16 09:55:30.027440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-04-16 09:55:30.027452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-04-16 09:55:30.027462 | orchestrator | 
skipping: [testbed-node-2] 2026-04-16 09:55:30.027472 | orchestrator | 2026-04-16 09:55:30.027483 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-04-16 09:55:30.027494 | orchestrator | Thursday 16 April 2026 09:54:07 +0000 (0:00:02.016) 0:00:52.189 ******** 2026-04-16 09:55:30.027504 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:55:30.027514 | orchestrator | 2026-04-16 09:55:30.027524 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 09:55:30.027548 | orchestrator | Thursday 16 April 2026 09:54:25 +0000 (0:00:17.407) 0:01:09.597 ******** 2026-04-16 09:55:30.027559 | orchestrator | 2026-04-16 09:55:30.027568 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 09:55:30.027578 | orchestrator | Thursday 16 April 2026 09:54:25 +0000 (0:00:00.454) 0:01:10.051 ******** 2026-04-16 09:55:30.027588 | orchestrator | 2026-04-16 09:55:30.027617 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-04-16 09:55:30.027627 | orchestrator | Thursday 16 April 2026 09:54:26 +0000 (0:00:00.443) 0:01:10.495 ******** 2026-04-16 09:55:30.027637 | orchestrator | 2026-04-16 09:55:30.027647 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-04-16 09:55:30.027657 | orchestrator | Thursday 16 April 2026 09:54:27 +0000 (0:00:00.934) 0:01:11.430 ******** 2026-04-16 09:55:30.027666 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:55:30.027735 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:55:30.027759 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:55:30.027775 | orchestrator | 2026-04-16 09:55:30.027790 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-04-16 09:55:30.027806 | orchestrator | Thursday 16 April 2026 09:54:40 +0000 
(0:00:13.234) 0:01:24.665 ******** 2026-04-16 09:55:30.027821 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:55:30.027837 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:55:30.027855 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:55:30.027871 | orchestrator | 2026-04-16 09:55:30.027888 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-04-16 09:55:30.027905 | orchestrator | Thursday 16 April 2026 09:54:53 +0000 (0:00:12.817) 0:01:37.482 ******** 2026-04-16 09:55:30.027922 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:55:30.027938 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:55:30.027955 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:55:30.027987 | orchestrator | 2026-04-16 09:55:30.028004 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-04-16 09:55:30.028020 | orchestrator | Thursday 16 April 2026 09:55:11 +0000 (0:00:17.982) 0:01:55.464 ******** 2026-04-16 09:55:30.028037 | orchestrator | changed: [testbed-node-2] 2026-04-16 09:55:30.028054 | orchestrator | changed: [testbed-node-1] 2026-04-16 09:55:30.028070 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:55:30.028085 | orchestrator | 2026-04-16 09:55:30.028101 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:55:30.028118 | orchestrator | testbed-node-0 : ok=16  changed=9  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 09:55:30.028135 | orchestrator | testbed-node-1 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 09:55:30.028152 | orchestrator | testbed-node-2 : ok=15  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 09:55:30.028170 | orchestrator | 2026-04-16 09:55:30.028186 | orchestrator | 2026-04-16 09:55:30.028204 | orchestrator | TASKS RECAP 
******************************************************************** 2026-04-16 09:55:30.028220 | orchestrator | Thursday 16 April 2026 09:55:29 +0000 (0:00:18.505) 0:02:13.970 ******** 2026-04-16 09:55:30.028238 | orchestrator | =============================================================================== 2026-04-16 09:55:30.028254 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 18.51s 2026-04-16 09:55:30.028271 | orchestrator | aodh : Restart aodh-listener container --------------------------------- 17.98s 2026-04-16 09:55:30.028287 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 17.41s 2026-04-16 09:55:30.028303 | orchestrator | aodh : Restart aodh-api container -------------------------------------- 13.23s 2026-04-16 09:55:30.028320 | orchestrator | aodh : Restart aodh-evaluator container -------------------------------- 12.82s 2026-04-16 09:55:30.028337 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 9.18s 2026-04-16 09:55:30.028353 | orchestrator | aodh : Copying over config.json files for services ---------------------- 5.61s 2026-04-16 09:55:30.028370 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 5.12s 2026-04-16 09:55:30.028387 | orchestrator | service-check-containers : aodh | Check containers ---------------------- 5.10s 2026-04-16 09:55:30.028402 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 3.48s 2026-04-16 09:55:30.028418 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 2.72s 2026-04-16 09:55:30.028431 | orchestrator | aodh : include_tasks ---------------------------------------------------- 2.35s 2026-04-16 09:55:30.028448 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS certificate --- 2.22s 2026-04-16 09:55:30.028464 | orchestrator | service-cert-copy : aodh | 
Copying over backend internal TLS key -------- 2.07s 2026-04-16 09:55:30.028481 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-04-16 09:55:30.028497 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.86s 2026-04-16 09:55:30.028514 | orchestrator | aodh : Flush handlers --------------------------------------------------- 1.83s 2026-04-16 09:55:30.028531 | orchestrator | aodh : Copying over existing policy file -------------------------------- 1.73s 2026-04-16 09:55:30.028547 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.71s 2026-04-16 09:55:30.028563 | orchestrator | aodh : include_tasks ---------------------------------------------------- 1.64s 2026-04-16 09:55:30.200165 | orchestrator | ++ semver 10.0.0 7.0.0 2026-04-16 09:55:30.242266 | orchestrator | + [[ 1 -ge 0 ]] 2026-04-16 09:55:30.242337 | orchestrator | + osism apply -a bootstrap octavia 2026-04-16 09:55:31.485350 | orchestrator | 2026-04-16 09:55:31 | INFO  | Prepare task for execution of octavia. 2026-04-16 09:55:31.546837 | orchestrator | 2026-04-16 09:55:31 | INFO  | Task 8427cb31-a817-4120-bd35-c4468954ce89 (octavia) was prepared for execution. 2026-04-16 09:55:31.547436 | orchestrator | 2026-04-16 09:55:31 | INFO  | It takes a moment until task 8427cb31-a817-4120-bd35-c4468954ce89 (octavia) has been started and output is visible here. 
2026-04-16 09:56:20.130310 | orchestrator | 2026-04-16 09:56:20.130424 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:56:20.130440 | orchestrator | 2026-04-16 09:56:20.130451 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:56:20.130461 | orchestrator | Thursday 16 April 2026 09:55:36 +0000 (0:00:01.502) 0:00:01.502 ******** 2026-04-16 09:56:20.130472 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:56:20.130483 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:56:20.130493 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:56:20.130503 | orchestrator | 2026-04-16 09:56:20.130514 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:56:20.130524 | orchestrator | Thursday 16 April 2026 09:55:38 +0000 (0:00:01.906) 0:00:03.409 ******** 2026-04-16 09:56:20.130534 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-16 09:56:20.130545 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-16 09:56:20.130555 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-16 09:56:20.130565 | orchestrator | 2026-04-16 09:56:20.130575 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-16 09:56:20.130585 | orchestrator | 2026-04-16 09:56:20.130595 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-16 09:56:20.130605 | orchestrator | Thursday 16 April 2026 09:55:40 +0000 (0:00:02.237) 0:00:05.647 ******** 2026-04-16 09:56:20.130615 | orchestrator | included: /ansible/roles/octavia/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:56:20.130626 | orchestrator | 2026-04-16 09:56:20.130636 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 
2026-04-16 09:56:20.130646 | orchestrator | Thursday 16 April 2026 09:55:43 +0000 (0:00:02.822) 0:00:08.469 ******** 2026-04-16 09:56:20.130656 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:56:20.130666 | orchestrator | 2026-04-16 09:56:20.130740 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-04-16 09:56:20.130750 | orchestrator | Thursday 16 April 2026 09:55:46 +0000 (0:00:03.704) 0:00:12.174 ******** 2026-04-16 09:56:20.130760 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:56:20.130769 | orchestrator | 2026-04-16 09:56:20.130779 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-04-16 09:56:20.130789 | orchestrator | Thursday 16 April 2026 09:55:49 +0000 (0:00:03.047) 0:00:15.221 ******** 2026-04-16 09:56:20.130798 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:56:20.130808 | orchestrator | 2026-04-16 09:56:20.130818 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-04-16 09:56:20.130828 | orchestrator | Thursday 16 April 2026 09:55:53 +0000 (0:00:03.238) 0:00:18.459 ******** 2026-04-16 09:56:20.130838 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:56:20.130847 | orchestrator | 2026-04-16 09:56:20.130858 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-04-16 09:56:20.130869 | orchestrator | Thursday 16 April 2026 09:55:56 +0000 (0:00:03.680) 0:00:22.140 ******** 2026-04-16 09:56:20.130880 | orchestrator | changed: [testbed-node-0] 2026-04-16 09:56:20.130892 | orchestrator | 2026-04-16 09:56:20.130903 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 09:56:20.130915 | orchestrator | testbed-node-0 : ok=8  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 09:56:20.130927 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 
skipped=0 rescued=0 ignored=0 2026-04-16 09:56:20.130939 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-04-16 09:56:20.130974 | orchestrator | 2026-04-16 09:56:20.130984 | orchestrator | 2026-04-16 09:56:20.130993 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 09:56:20.131003 | orchestrator | Thursday 16 April 2026 09:56:19 +0000 (0:00:22.950) 0:00:45.090 ******** 2026-04-16 09:56:20.131013 | orchestrator | =============================================================================== 2026-04-16 09:56:20.131022 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.95s 2026-04-16 09:56:20.131032 | orchestrator | octavia : Creating Octavia database ------------------------------------- 3.70s 2026-04-16 09:56:20.131041 | orchestrator | octavia : Creating Octavia persistence database user and setting permissions --- 3.68s 2026-04-16 09:56:20.131050 | orchestrator | octavia : Creating Octavia database user and setting permissions -------- 3.24s 2026-04-16 09:56:20.131060 | orchestrator | octavia : Creating Octavia persistence database ------------------------- 3.05s 2026-04-16 09:56:20.131069 | orchestrator | octavia : include_tasks ------------------------------------------------- 2.82s 2026-04-16 09:56:20.131079 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.24s 2026-04-16 09:56:20.131089 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.91s 2026-04-16 09:56:20.307319 | orchestrator | + osism apply -a upgrade octavia 2026-04-16 09:56:21.534991 | orchestrator | 2026-04-16 09:56:21 | INFO  | Prepare task for execution of octavia. 2026-04-16 09:56:21.601461 | orchestrator | 2026-04-16 09:56:21 | INFO  | Task 7cc5417a-18f0-47e1-b5f2-f5e41862a780 (octavia) was prepared for execution. 
2026-04-16 09:56:21.601577 | orchestrator | 2026-04-16 09:56:21 | INFO  | It takes a moment until task 7cc5417a-18f0-47e1-b5f2-f5e41862a780 (octavia) has been started and output is visible here. 2026-04-16 09:57:01.360137 | orchestrator | 2026-04-16 09:57:01.360241 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 09:57:01.360256 | orchestrator | 2026-04-16 09:57:01.360270 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 09:57:01.360284 | orchestrator | Thursday 16 April 2026 09:56:26 +0000 (0:00:01.861) 0:00:01.861 ******** 2026-04-16 09:57:01.360304 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:57:01.360320 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:57:01.360334 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:57:01.360359 | orchestrator | 2026-04-16 09:57:01.360372 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 09:57:01.360385 | orchestrator | Thursday 16 April 2026 09:56:28 +0000 (0:00:01.709) 0:00:03.571 ******** 2026-04-16 09:57:01.360399 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-04-16 09:57:01.360412 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-04-16 09:57:01.360424 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-04-16 09:57:01.360437 | orchestrator | 2026-04-16 09:57:01.360451 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-04-16 09:57:01.360465 | orchestrator | 2026-04-16 09:57:01.360478 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-16 09:57:01.360491 | orchestrator | Thursday 16 April 2026 09:56:30 +0000 (0:00:01.877) 0:00:05.449 ******** 2026-04-16 09:57:01.360504 | orchestrator | included: /ansible/roles/octavia/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-04-16 09:57:01.360519 | orchestrator | 2026-04-16 09:57:01.360534 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-04-16 09:57:01.360548 | orchestrator | Thursday 16 April 2026 09:56:32 +0000 (0:00:01.875) 0:00:07.325 ******** 2026-04-16 09:57:01.360562 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 09:57:01.360576 | orchestrator | 2026-04-16 09:57:01.360589 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-04-16 09:57:01.360602 | orchestrator | Thursday 16 April 2026 09:56:35 +0000 (0:00:02.901) 0:00:10.226 ******** 2026-04-16 09:57:01.360643 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:57:01.360685 | orchestrator | 2026-04-16 09:57:01.360696 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-04-16 09:57:01.360706 | orchestrator | Thursday 16 April 2026 09:56:40 +0000 (0:00:05.846) 0:00:16.073 ******** 2026-04-16 09:57:01.360716 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:57:01.360725 | orchestrator | 2026-04-16 09:57:01.360735 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-04-16 09:57:01.360762 | orchestrator | Thursday 16 April 2026 09:56:45 +0000 (0:00:04.334) 0:00:20.408 ******** 2026-04-16 09:57:01.360772 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-04-16 09:57:01.360791 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-04-16 09:57:01.360801 | orchestrator | 2026-04-16 09:57:01.360809 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-04-16 09:57:01.360817 | orchestrator | Thursday 16 April 2026 09:56:53 +0000 (0:00:08.207) 0:00:28.615 ******** 2026-04-16 09:57:01.360825 | orchestrator | ok: 
[testbed-node-0] 2026-04-16 09:57:01.360833 | orchestrator | 2026-04-16 09:57:01.360841 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-04-16 09:57:01.360849 | orchestrator | Thursday 16 April 2026 09:56:58 +0000 (0:00:04.701) 0:00:33.317 ******** 2026-04-16 09:57:01.360857 | orchestrator | ok: [testbed-node-0] 2026-04-16 09:57:01.360865 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:57:01.360873 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:57:01.360880 | orchestrator | 2026-04-16 09:57:01.360889 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-04-16 09:57:01.360896 | orchestrator | Thursday 16 April 2026 09:56:59 +0000 (0:00:01.335) 0:00:34.653 ******** 2026-04-16 09:57:01.360908 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:01.360950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:01.360961 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:01.360979 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:01.360987 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:01.360996 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:01.361005 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:01.361026 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:05.992100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:05.992235 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:05.992252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:05.992265 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:05.992277 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:05.992303 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:05.992336 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:05.992361 | orchestrator |
2026-04-16 09:57:05.992375 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2026-04-16 09:57:05.992387 | orchestrator | Thursday 16 April 2026 09:57:03 +0000 (0:00:03.745) 0:00:38.399 ********
2026-04-16 09:57:05.992399 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:57:05.992410 | orchestrator |
2026-04-16 09:57:05.992422 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2026-04-16 09:57:05.992433 | orchestrator | Thursday 16 April 2026 09:57:04 +0000 (0:00:01.090) 0:00:39.489 ********
2026-04-16 09:57:05.992444 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:57:05.992455 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:57:05.992466 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:57:05.992477 | orchestrator |
2026-04-16 09:57:05.992488 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2026-04-16 09:57:05.992499 | orchestrator | Thursday 16 April 2026 09:57:05 +0000 (0:00:01.316) 0:00:40.805 ********
2026-04-16 09:57:05.992511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 09:57:05.992527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group':
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:05.992540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:05.992557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:05.992583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:10.407356 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:57:10.407435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:10.407447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:10.407454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:10.407460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:10.407477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:10.407497 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:57:10.407515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:10.407521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:10.407526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 09:57:10.407531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 09:57:10.407536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:10.407541 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:57:10.407550 | orchestrator |
2026-04-16 09:57:10.407556 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-04-16 09:57:10.407562 | orchestrator | Thursday 16 April 2026 09:57:07 +0000 (0:00:01.675) 0:00:42.481 ********
2026-04-16 09:57:10.407567 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 09:57:10.407572 | orchestrator |
2026-04-16 09:57:10.407576 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2026-04-16 09:57:10.407581 | orchestrator | Thursday 16 April 2026 09:57:08 +0000 (0:00:01.650) 0:00:44.131 ********
2026-04-16 09:57:10.407593 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 09:57:13.850747 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:13.850864 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:13.850891 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:13.850945 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:13.850986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:13.851034 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851058 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851079 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851100 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851122 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851163 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:13.851184 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:13.851207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:15.686335 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:57:15.686448 | orchestrator |
2026-04-16 09:57:15.686466 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-04-16 09:57:15.686480 | orchestrator | Thursday 16 April 2026 09:57:15 +0000 (0:00:06.094) 0:00:50.226 ********
2026-04-16 09:57:15.686495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value':
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:15.686535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:15.686564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:15.686577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:15.686609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:15.686622 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:57:15.686635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:15.686722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:15.686758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:15.686786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:15.686799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:15.686810 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:57:15.686832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 
'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:17.321226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:17.321355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:17.321376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:17.321405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:17.321419 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:57:17.321433 | orchestrator | 2026-04-16 09:57:17.321444 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-04-16 09:57:17.321456 | orchestrator | Thursday 16 April 2026 09:57:16 +0000 (0:00:01.766) 0:00:51.993 ******** 2026-04-16 09:57:17.321467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:17.321501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:17.321515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:17.321536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:17.321549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:17.321564 | orchestrator | skipping: [testbed-node-0] 2026-04-16 09:57:17.321576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:17.321587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:17.321605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:21.091914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:21.092031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:21.092068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:57:21.092087 | orchestrator | skipping: [testbed-node-1] 2026-04-16 09:57:21.092102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:57:21.092116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:57:21.092148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:57:21.092183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-04-16 09:57:21.092195 | orchestrator | skipping: [testbed-node-2] 2026-04-16 09:57:21.092207 | orchestrator | 2026-04-16 09:57:21.092220 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-04-16 09:57:21.092232 | orchestrator | Thursday 16 April 2026 09:57:18 +0000 (0:00:01.757) 0:00:53.750 ******** 2026-04-16 09:57:21.092244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:21.092263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:21.092276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:21.092305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:31.230814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:31.230934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:31.230965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.230987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.230998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:57:31.231108 | orchestrator | 2026-04-16 09:57:31.231119 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-04-16 09:57:31.231136 | orchestrator | Thursday 16 April 2026 09:57:25 +0000 
(0:00:06.544) 0:01:00.295 ******** 2026-04-16 09:57:31.231145 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 09:57:31.231158 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 09:57:31.231174 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-04-16 09:57:31.231219 | orchestrator | 2026-04-16 09:57:31.231230 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-04-16 09:57:31.231239 | orchestrator | Thursday 16 April 2026 09:57:27 +0000 (0:00:02.604) 0:01:02.900 ******** 2026-04-16 09:57:31.231255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:44.444676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:44.444822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:57:44.444841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:44.444877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:44.444890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:57:44.444924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.444947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.444972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.444990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.445019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.445041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:57:44.445062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:57:44.445095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:58:09.422665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:58:09.422778 | orchestrator | 2026-04-16 09:58:09.422794 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-04-16 09:58:09.422806 | orchestrator | Thursday 16 April 2026 09:57:45 +0000 (0:00:18.141) 0:01:21.042 ******** 2026-04-16 09:58:09.422831 | orchestrator | ok: 
[testbed-node-0] 2026-04-16 09:58:09.422841 | orchestrator | ok: [testbed-node-1] 2026-04-16 09:58:09.422850 | orchestrator | ok: [testbed-node-2] 2026-04-16 09:58:09.422858 | orchestrator | 2026-04-16 09:58:09.422867 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-04-16 09:58:09.422874 | orchestrator | Thursday 16 April 2026 09:57:48 +0000 (0:00:02.689) 0:01:23.732 ******** 2026-04-16 09:58:09.422884 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.422912 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.422933 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.422949 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.422959 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.422967 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.422975 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.422982 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.422991 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.422999 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423006 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423014 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423021 | orchestrator | 2026-04-16 09:58:09.423030 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-04-16 09:58:09.423039 | orchestrator | Thursday 16 April 2026 09:57:54 +0000 (0:00:05.817) 0:01:29.549 ******** 2026-04-16 09:58:09.423047 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423055 | orchestrator 
| ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423063 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423071 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423079 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423087 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423095 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423103 | orchestrator | ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423111 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423119 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423126 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423133 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423141 | orchestrator | 2026-04-16 09:58:09.423150 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-04-16 09:58:09.423157 | orchestrator | Thursday 16 April 2026 09:58:00 +0000 (0:00:06.177) 0:01:35.727 ******** 2026-04-16 09:58:09.423164 | orchestrator | ok: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423172 | orchestrator | ok: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423180 | orchestrator | ok: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-04-16 09:58:09.423188 | orchestrator | ok: [testbed-node-0] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423196 | orchestrator | ok: [testbed-node-2] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423204 | orchestrator | ok: [testbed-node-1] => (item=client_ca.cert.pem) 2026-04-16 09:58:09.423212 | orchestrator | ok: [testbed-node-0] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423220 | orchestrator 
| ok: [testbed-node-1] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423228 | orchestrator | ok: [testbed-node-2] => (item=server_ca.cert.pem) 2026-04-16 09:58:09.423236 | orchestrator | ok: [testbed-node-0] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423243 | orchestrator | ok: [testbed-node-2] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423252 | orchestrator | ok: [testbed-node-1] => (item=server_ca.key.pem) 2026-04-16 09:58:09.423260 | orchestrator | 2026-04-16 09:58:09.423269 | orchestrator | TASK [service-check-containers : octavia | Check containers] ******************* 2026-04-16 09:58:09.423280 | orchestrator | Thursday 16 April 2026 09:58:07 +0000 (0:00:06.495) 0:01:42.223 ******** 2026-04-16 09:58:09.423311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:58:09.423344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:58:09.423353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-04-16 09:58:09.423362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:58:09.423372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:58:09.423388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-04-16 09:58:14.773862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.773993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-04-16 09:58:14.774234 | orchestrator | 2026-04-16 09:58:14.774248 | orchestrator | TASK [service-check-containers : octavia | Notify handlers to restart containers] *** 2026-04-16 09:58:14.774260 | orchestrator | Thursday 16 April 2026 09:58:13 +0000 (0:00:06.064) 0:01:48.287 ******** 2026-04-16 09:58:14.774273 | orchestrator | 
changed: [testbed-node-0] => { 2026-04-16 09:58:14.774285 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:58:14.774297 | orchestrator | } 2026-04-16 09:58:14.774308 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 09:58:14.774319 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:58:14.774330 | orchestrator | } 2026-04-16 09:58:14.774340 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 09:58:14.774351 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 09:58:14.774362 | orchestrator | } 2026-04-16 09:58:14.774373 | orchestrator | 2026-04-16 09:58:14.774383 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 09:58:14.774395 | orchestrator | Thursday 16 April 2026 09:58:14 +0000 (0:00:01.329) 0:01:49.616 ******** 2026-04-16 09:58:14.774407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-04-16 09:58:14.774432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-04-16 09:58:14.774453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-04-16 09:58:15.046953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-04-16 09:58:15.047067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:58:15.047095 | orchestrator | skipping: [testbed-node-0]
2026-04-16 09:58:15.047118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 09:58:15.047170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 09:58:15.047184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 09:58:15.047215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 09:58:15.047235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:58:15.047246 | orchestrator | skipping: [testbed-node-1]
2026-04-16 09:58:15.047256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-04-16 09:58:15.047267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-04-16 09:58:15.047284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-04-16 09:58:15.047295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-04-16 09:58:15.047312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-04-16 09:59:46.275154 | orchestrator | skipping: [testbed-node-2]
2026-04-16 09:59:46.275257 | orchestrator |
2026-04-16 09:59:46.275273 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-04-16 09:59:46.275299 | orchestrator | Thursday 16 April 2026 09:58:16 +0000 (0:00:02.174) 0:01:51.791 ********
2026-04-16 09:59:46.275311 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275323 | orchestrator |
2026-04-16 09:59:46.275334 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 09:59:46.275345 | orchestrator | Thursday 16 April 2026 09:58:30 +0000 (0:00:13.589) 0:02:05.380 ********
2026-04-16 09:59:46.275356 | orchestrator |
2026-04-16 09:59:46.275367 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 09:59:46.275378 | orchestrator | Thursday 16 April 2026 09:58:30 +0000 (0:00:00.430) 0:02:05.811 ********
2026-04-16 09:59:46.275389 | orchestrator |
2026-04-16 09:59:46.275400 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-04-16 09:59:46.275411 | orchestrator | Thursday 16 April 2026 09:58:31 +0000 (0:00:00.433) 0:02:06.244 ********
2026-04-16 09:59:46.275422 | orchestrator |
2026-04-16 09:59:46.275433 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-04-16 09:59:46.275453 | orchestrator | Thursday 16 April 2026 09:58:31 +0000 (0:00:00.772) 0:02:07.016 ********
2026-04-16 09:59:46.275470 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275489 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:59:46.275508 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:59:46.275528 | orchestrator |
2026-04-16 09:59:46.275547 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-04-16 09:59:46.275566 | orchestrator | Thursday 16 April 2026 09:58:51 +0000 (0:00:20.134) 0:02:27.151 ********
2026-04-16 09:59:46.275633 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:59:46.275667 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275682 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:59:46.275704 | orchestrator |
2026-04-16 09:59:46.275726 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-04-16 09:59:46.275748 | orchestrator | Thursday 16 April 2026 09:59:06 +0000 (0:00:14.381) 0:02:41.532 ********
2026-04-16 09:59:46.275768 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275781 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:59:46.275793 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:59:46.275805 | orchestrator |
2026-04-16 09:59:46.275817 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-04-16 09:59:46.275829 | orchestrator | Thursday 16 April 2026 09:59:19 +0000 (0:00:12.968) 0:02:54.501 ********
2026-04-16 09:59:46.275842 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:59:46.275856 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275869 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:59:46.275881 | orchestrator |
2026-04-16 09:59:46.275893 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-04-16 09:59:46.275906 | orchestrator | Thursday 16 April 2026 09:59:32 +0000 (0:00:12.982) 0:03:07.483 ********
2026-04-16 09:59:46.275918 | orchestrator | changed: [testbed-node-0]
2026-04-16 09:59:46.275931 | orchestrator | changed: [testbed-node-1]
2026-04-16 09:59:46.275943 | orchestrator | changed: [testbed-node-2]
2026-04-16 09:59:46.275955 | orchestrator |
2026-04-16 09:59:46.275968 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:59:46.275981 | orchestrator | testbed-node-0 : ok=27  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 09:59:46.275994 | orchestrator | testbed-node-1 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 09:59:46.276006 | orchestrator | testbed-node-2 : ok=22  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-04-16 09:59:46.276019 | orchestrator |
2026-04-16 09:59:46.276032 | orchestrator |
2026-04-16 09:59:46.276044 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:59:46.276057 | orchestrator | Thursday 16 April 2026 09:59:45 +0000 (0:00:13.616) 0:03:21.100 ********
2026-04-16 09:59:46.276068 | orchestrator | ===============================================================================
2026-04-16 09:59:46.276079 | orchestrator | octavia : Restart octavia-api container -------------------------------- 20.13s
2026-04-16 09:59:46.276090 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 18.14s
2026-04-16 09:59:46.276104 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 14.38s
2026-04-16 09:59:46.276123 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 13.62s
2026-04-16 09:59:46.276142 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 13.59s
2026-04-16 09:59:46.276158 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 12.98s
2026-04-16 09:59:46.276175 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 12.97s
2026-04-16 09:59:46.276190 | orchestrator | octavia : Get security groups for octavia ------------------------------- 8.21s
2026-04-16 09:59:46.276207 | orchestrator | octavia : Copying over config.json files for services ------------------- 6.54s
2026-04-16 09:59:46.276224 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.50s
2026-04-16 09:59:46.276243 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.18s
2026-04-16 09:59:46.276261 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 6.09s
2026-04-16 09:59:46.276279 | orchestrator | service-check-containers : octavia | Check containers ------------------- 6.06s
2026-04-16 09:59:46.276296 | orchestrator | octavia : Get amphora flavor info --------------------------------------- 5.85s
2026-04-16 09:59:46.276351 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.82s
2026-04-16 09:59:46.276364 | orchestrator | octavia : Get loadbalancer management network --------------------------- 4.70s
2026-04-16 09:59:46.276375 | orchestrator | octavia : Get service project id ---------------------------------------- 4.33s
2026-04-16 09:59:46.276393 | orchestrator | octavia : Ensuring config directories exist ----------------------------- 3.75s
2026-04-16 09:59:46.276404 | orchestrator | octavia : include_tasks ------------------------------------------------- 2.90s
2026-04-16 09:59:46.276462 | orchestrator | octavia : Copying over Octavia SSH key ---------------------------------- 2.69s
2026-04-16 09:59:46.435303 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-16 09:59:46.435375 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/310-openstack-extended.sh
2026-04-16 09:59:47.677988 | orchestrator | 2026-04-16 09:59:47 | INFO  | Prepare task for execution of gnocchi.
2026-04-16 09:59:47.738464 | orchestrator | 2026-04-16 09:59:47 | INFO  | Task 3108c588-e0ca-49d4-8c78-d80a0eb6fd67 (gnocchi) was prepared for execution.
2026-04-16 09:59:47.738563 | orchestrator | 2026-04-16 09:59:47 | INFO  | It takes a moment until task 3108c588-e0ca-49d4-8c78-d80a0eb6fd67 (gnocchi) has been started and output is visible here.
2026-04-16 09:59:58.000630 | orchestrator |
2026-04-16 09:59:58.000724 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 09:59:58.000735 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 09:59:58.000743 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 09:59:58.000757 | orchestrator |
2026-04-16 09:59:58.000763 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 09:59:58.000770 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 09:59:58.000776 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 09:59:58.000790 | orchestrator | Thursday 16 April 2026 09:59:52 +0000 (0:00:01.164) 0:00:01.164 ********
2026-04-16 09:59:58.000797 | orchestrator | ok: [testbed-node-0]
2026-04-16 09:59:58.000824 | orchestrator | ok: [testbed-node-1]
2026-04-16 09:59:58.000831 | orchestrator | ok: [testbed-node-2]
2026-04-16 09:59:58.000838 | orchestrator |
2026-04-16 09:59:58.000844 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 09:59:58.000850 | orchestrator | Thursday 16 April 2026 09:59:53 +0000 (0:00:01.001) 0:00:02.165 ********
2026-04-16 09:59:58.000857 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-04-16 09:59:58.000864 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-04-16 09:59:58.000872 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-04-16 09:59:58.000883 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-04-16 09:59:58.000894 | orchestrator |
2026-04-16 09:59:58.000905 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-04-16 09:59:58.000916 | orchestrator | skipping: no hosts matched
2026-04-16 09:59:58.000926 | orchestrator |
2026-04-16 09:59:58.000937 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 09:59:58.000948 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 09:59:58.000960 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 09:59:58.000971 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-04-16 09:59:58.000995 | orchestrator |
2026-04-16 09:59:58.001006 | orchestrator |
2026-04-16 09:59:58.001041 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 09:59:58.001050 | orchestrator | Thursday 16 April 2026 09:59:57 +0000 (0:00:04.645) 0:00:06.811 ********
2026-04-16 09:59:58.001056 | orchestrator | ===============================================================================
2026-04-16 09:59:58.001062 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.65s
2026-04-16 09:59:58.001069 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.00s
2026-04-16 09:59:59.189430 | orchestrator | 2026-04-16 09:59:59 | INFO  | Prepare task for execution of manila.
2026-04-16 09:59:59.244812 | orchestrator | 2026-04-16 09:59:59 | INFO  | Task b3ad02d0-b9ab-4d14-a90b-7ab350a1417b (manila) was prepared for execution.
2026-04-16 09:59:59.244878 | orchestrator | 2026-04-16 09:59:59 | INFO  | It takes a moment until task b3ad02d0-b9ab-4d14-a90b-7ab350a1417b (manila) has been started and output is visible here.
2026-04-16 10:00:07.861274 | orchestrator |
2026-04-16 10:00:07.861396 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 10:00:07.861415 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 10:00:07.861428 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 10:00:07.861451 | orchestrator |
2026-04-16 10:00:07.861463 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 10:00:07.861474 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 10:00:07.861486 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 10:00:07.861523 | orchestrator | Thursday 16 April 2026 10:00:02 +0000 (0:00:01.041) 0:00:01.041 ********
2026-04-16 10:00:07.861535 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:00:07.861547 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:00:07.861558 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:00:07.861637 | orchestrator |
2026-04-16 10:00:07.861652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 10:00:07.861664 | orchestrator | Thursday 16 April 2026 10:00:03 +0000 (0:00:00.833) 0:00:01.875 ********
2026-04-16 10:00:07.861675 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-04-16 10:00:07.861686 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-04-16 10:00:07.861697 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-04-16 10:00:07.861708 | orchestrator |
2026-04-16 10:00:07.861719 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-04-16 10:00:07.861730 | orchestrator |
2026-04-16 10:00:07.861741 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 10:00:07.861752 | orchestrator | Thursday 16 April 2026 10:00:04 +0000 (0:00:00.755) 0:00:02.630 ********
2026-04-16 10:00:07.861764 | orchestrator | included: /ansible/roles/manila/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:00:07.861778 | orchestrator |
2026-04-16 10:00:07.861791 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-04-16 10:00:07.861803 | orchestrator | Thursday 16 April 2026 10:00:05 +0000 (0:00:01.092) 0:00:03.723 ********
2026-04-16 10:00:07.861819 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:00:07.861862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:00:07.861898 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:00:07.861920 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 10:00:07.861934 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 10:00:07.861947 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-04-16 10:00:07.861971 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-16 10:00:07.861984 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-16 10:00:07.862003 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-04-16 10:00:16.844850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-16 10:00:16.844969 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-16 10:00:16.844986 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-04-16 10:00:16.845020 | orchestrator |
2026-04-16 10:00:16.845034 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 10:00:16.845047 | orchestrator | Thursday 16 April 2026 10:00:08 +0000 (0:00:02.626) 0:00:06.349 ********
2026-04-16 10:00:16.845058 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:00:16.845070 | orchestrator |
2026-04-16 10:00:16.845081 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-04-16 10:00:16.845091 | orchestrator | Thursday 16 April 2026 10:00:09 +0000 (0:00:01.267) 0:00:07.617 ********
2026-04-16 10:00:16.845102 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:00:16.845114 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:00:16.845125 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:00:16.845135 | orchestrator |
2026-04-16 10:00:16.845146 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-04-16 10:00:16.845157 | orchestrator | Thursday 16 April 2026 10:00:10 +0000 (0:00:00.992) 0:00:08.611 ********
2026-04-16 10:00:16.845185 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845199 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845222 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845233 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845244 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845256 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845275 | orchestrator |
2026-04-16 10:00:16.845302 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-04-16 10:00:16.845325 | orchestrator | Thursday 16 April 2026 10:00:11 +0000 (0:00:01.416) 0:00:10.028 ********
2026-04-16 10:00:16.845343 | orchestrator | ok: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845387 | orchestrator | ok: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845407 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845436 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845457 | orchestrator | ok: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-04-16 10:00:16.845476 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-04-16 10:00:16.845509 | orchestrator |
2026-04-16 10:00:16.845528 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-04-16 10:00:16.845547 | orchestrator | Thursday 16 April 2026 10:00:13 +0000 (0:00:01.175) 0:00:11.204 ********
2026-04-16 10:00:16.845631 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-04-16 10:00:16.845655 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-04-16 10:00:16.845674 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-04-16 10:00:16.845693 | orchestrator |
2026-04-16 10:00:16.845712 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-04-16 10:00:16.845729 | orchestrator | Thursday 16 April 2026 10:00:13 +0000 (0:00:00.892) 0:00:12.096 ********
2026-04-16 10:00:16.845747 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:00:16.845759 | orchestrator |
2026-04-16 10:00:16.845769 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-04-16 10:00:16.845780 | orchestrator | Thursday 16 April 2026 10:00:14 +0000 (0:00:00.128) 0:00:12.224 ********
2026-04-16 10:00:16.845791 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:00:16.845801 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:00:16.845812 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:00:16.845823 | orchestrator |
2026-04-16 10:00:16.845834 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-04-16 10:00:16.845844 | orchestrator | Thursday 16 April 2026 10:00:14 +0000 (0:00:00.302) 0:00:12.526 ********
2026-04-16 10:00:16.845855 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:00:16.845866 | orchestrator |
2026-04-16 10:00:16.845877 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-04-16 10:00:16.845887 | orchestrator | Thursday 16 April 2026 10:00:15 +0000 (0:00:01.064) 0:00:13.591 ********
2026-04-16 10:00:16.845900 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:00:16.845914 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:00:16.845946 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-api', 'value':
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:20.027949 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028027 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028032 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028045 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028075 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028096 | orchestrator | ok: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028100 | orchestrator | ok: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:20.028104 | orchestrator | 2026-04-16 10:00:20.028109 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-04-16 10:00:20.028114 | orchestrator | Thursday 16 April 2026 10:00:19 +0000 (0:00:04.022) 0:00:17.614 ******** 2026-04-16 10:00:20.028122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:20.028131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.028142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  
2026-04-16 10:00:20.924334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:20.924459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924511 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:00:20.924540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924759 | orchestrator | skipping: [testbed-node-1] 2026-04-16 
10:00:20.924780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924802 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:00:20.924814 | orchestrator | 2026-04-16 10:00:20.924826 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-04-16 10:00:20.924838 | orchestrator | Thursday 16 April 2026 10:00:20 +0000 (0:00:00.930) 0:00:18.545 ******** 2026-04-16 10:00:20.924857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}}}})  2026-04-16 10:00:20.924872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:20.924897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:23.211876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:23.212100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212114 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:00:23.212129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212187 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:00:23.212208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:23.212232 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:00:23.212243 | orchestrator | 2026-04-16 10:00:23.212256 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-04-16 10:00:23.212268 | orchestrator | Thursday 16 April 2026 10:00:21 +0000 (0:00:01.463) 0:00:20.008 ******** 2026-04-16 10:00:23.212286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:23.212307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:28.683285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:28.683428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683554 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:28.683625 | orchestrator 
| 2026-04-16 10:00:28.683638 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-04-16 10:00:28.683651 | orchestrator | Thursday 16 April 2026 10:00:26 +0000 (0:00:04.194) 0:00:24.203 ******** 2026-04-16 10:00:28.683663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:28.683683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:36.832868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:36.833036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:36.833072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 
 2026-04-16 10:00:36.833141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:36.833167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:36.833209 | orchestrator | 2026-04-16 10:00:36.833223 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-04-16 10:00:36.833236 | orchestrator | Thursday 16 April 2026 10:00:32 +0000 (0:00:06.663) 0:00:30.867 ******** 2026-04-16 10:00:36.833247 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-04-16 10:00:36.833259 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-04-16 10:00:36.833278 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-04-16 10:00:36.833311 | orchestrator 
| 2026-04-16 10:00:36.833333 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-04-16 10:00:36.833345 | orchestrator | Thursday 16 April 2026 10:00:36 +0000 (0:00:03.587) 0:00:34.454 ******** 2026-04-16 10:00:36.833364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:38.909043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909264 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:00:38.909421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': 
'30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:38.909470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909531 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:00:38.909548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:38.909595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:38.909649 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:00:38.909662 | orchestrator | 2026-04-16 10:00:38.909675 | orchestrator | TASK [service-check-containers : manila | Check containers] ******************** 2026-04-16 10:00:38.909690 | orchestrator | Thursday 16 April 2026 10:00:37 +0000 (0:00:01.213) 0:00:35.668 ******** 2026-04-16 10:00:38.909713 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:42.031379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:42.031467 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:00:42.031495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-04-16 10:00:42.031626 | orchestrator | 2026-04-16 10:00:42.031634 | orchestrator | TASK [service-check-containers : manila | Notify handlers to restart containers] *** 2026-04-16 10:00:42.031642 | orchestrator | Thursday 16 April 2026 10:00:41 +0000 (0:00:04.230) 0:00:39.898 ******** 2026-04-16 10:00:42.031649 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 10:00:42.031657 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 10:00:42.031663 | orchestrator | } 2026-04-16 10:00:42.031670 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 10:00:42.031676 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 10:00:42.031682 | orchestrator | } 2026-04-16 10:00:42.031690 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 10:00:42.031702 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 
10:00:42.855003 | orchestrator | } 2026-04-16 10:00:42.855117 | orchestrator | 2026-04-16 10:00:42.855149 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 10:00:42.855177 | orchestrator | Thursday 16 April 2026 10:00:42 +0000 (0:00:00.322) 0:00:40.220 ******** 2026-04-16 10:00:42.855222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:42.855275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 
10:00:42.855299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855340 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:00:42.855383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:42.855411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 
'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855456 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:00:42.855467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:00:42.855478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-04-16 10:00:42.855498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-04-16 10:04:04.802448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-04-16 10:04:04.802597 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:04:04.802611 | orchestrator | 2026-04-16 10:04:04.802620 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-04-16 10:04:04.802630 | orchestrator | Thursday 16 April 2026 10:00:43 +0000 (0:00:01.586) 0:00:41.807 
******** 2026-04-16 10:04:04.802637 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:04:04.802645 | orchestrator | 2026-04-16 10:04:04.802653 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 10:04:04.802660 | orchestrator | Thursday 16 April 2026 10:01:03 +0000 (0:00:19.793) 0:01:01.601 ******** 2026-04-16 10:04:04.802668 | orchestrator | 2026-04-16 10:04:04.802675 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 10:04:04.802682 | orchestrator | Thursday 16 April 2026 10:01:03 +0000 (0:00:00.072) 0:01:01.673 ******** 2026-04-16 10:04:04.802730 | orchestrator | 2026-04-16 10:04:04.802737 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-04-16 10:04:04.802745 | orchestrator | Thursday 16 April 2026 10:01:03 +0000 (0:00:00.071) 0:01:01.745 ******** 2026-04-16 10:04:04.802752 | orchestrator | 2026-04-16 10:04:04.802759 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-04-16 10:04:04.802767 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-04-16 10:04:04.802775 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-04-16 10:04:04.802790 | orchestrator | Thursday 16 April 2026 10:01:03 +0000 (0:00:00.070) 0:01:01.815 ******** 2026-04-16 10:04:04.802797 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:04:04.802805 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:04:04.802812 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:04:04.802820 | orchestrator | 2026-04-16 10:04:04.802827 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-04-16 10:04:04.802834 | orchestrator | Thursday 16 April 2026 10:01:20 +0000 (0:00:16.473) 0:01:18.289 ******** 2026-04-16 10:04:04.802842 | orchestrator | changed: 
[testbed-node-0] 2026-04-16 10:04:04.802849 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:04:04.802857 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:04:04.802864 | orchestrator | 2026-04-16 10:04:04.802872 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-04-16 10:04:04.802879 | orchestrator | Thursday 16 April 2026 10:01:32 +0000 (0:00:12.327) 0:01:30.617 ******** 2026-04-16 10:04:04.802886 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:04:04.802894 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:04:04.802901 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:04:04.802909 | orchestrator | 2026-04-16 10:04:04.802916 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-04-16 10:04:04.802924 | orchestrator | Thursday 16 April 2026 10:01:44 +0000 (0:00:11.913) 0:01:42.530 ******** 2026-04-16 10:04:04.802932 | orchestrator | 2026-04-16 10:04:04.802939 | orchestrator | STILL ALIVE [task 'manila : Restart manila-share container' is running] ******** 2026-04-16 10:04:04.802947 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:04:04.802954 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:04:04.802962 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:04:04.802970 | orchestrator | 2026-04-16 10:04:04.802979 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 10:04:04.803011 | orchestrator | testbed-node-0 : ok=21  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 10:04:04.803023 | orchestrator | testbed-node-1 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 10:04:04.803031 | orchestrator | testbed-node-2 : ok=20  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 10:04:04.803040 | orchestrator | 2026-04-16 10:04:04.803049 | orchestrator | 2026-04-16 
10:04:04.803058 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 10:04:04.803067 | orchestrator | Thursday 16 April 2026 10:04:04 +0000 (0:02:20.103) 0:04:02.634 ******** 2026-04-16 10:04:04.803079 | orchestrator | =============================================================================== 2026-04-16 10:04:04.803092 | orchestrator | manila : Restart manila-share container ------------------------------- 140.10s 2026-04-16 10:04:04.803103 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 19.79s 2026-04-16 10:04:04.803114 | orchestrator | manila : Restart manila-api container ---------------------------------- 16.47s 2026-04-16 10:04:04.803126 | orchestrator | manila : Restart manila-data container --------------------------------- 12.33s 2026-04-16 10:04:04.803139 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 11.91s 2026-04-16 10:04:04.803152 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.66s 2026-04-16 10:04:04.803185 | orchestrator | service-check-containers : manila | Check containers -------------------- 4.23s 2026-04-16 10:04:04.803194 | orchestrator | manila : Copying over config.json files for services -------------------- 4.20s 2026-04-16 10:04:04.803201 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.02s 2026-04-16 10:04:04.803209 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.59s 2026-04-16 10:04:04.803216 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.63s 2026-04-16 10:04:04.803231 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.59s 2026-04-16 10:04:04.803238 | orchestrator | service-cert-copy : manila | Copying over backend internal TLS key ------ 1.46s 2026-04-16 10:04:04.803246 | 
orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.42s 2026-04-16 10:04:04.803253 | orchestrator | manila : include_tasks -------------------------------------------------- 1.27s 2026-04-16 10:04:04.803260 | orchestrator | manila : Copying over existing policy file ------------------------------ 1.21s 2026-04-16 10:04:04.803268 | orchestrator | manila : Copy over ceph Manila keyrings --------------------------------- 1.18s 2026-04-16 10:04:04.803275 | orchestrator | manila : include_tasks -------------------------------------------------- 1.09s 2026-04-16 10:04:04.803283 | orchestrator | manila : include_tasks -------------------------------------------------- 1.07s 2026-04-16 10:04:04.803295 | orchestrator | manila : Ensuring manila service ceph config subdir exists -------------- 0.99s 2026-04-16 10:04:04.917007 | orchestrator | + [[ true == \t\r\u\e ]] 2026-04-16 10:04:04.917087 | orchestrator | + osism migrate rabbitmq3to4 delete 2026-04-16 10:04:09.874737 | orchestrator | 2026-04-16 10:04:09 | ERROR  | Unable to get ansible vault password 2026-04-16 10:04:09.874850 | orchestrator | 2026-04-16 10:04:09 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-16 10:04:09.874868 | orchestrator | 2026-04-16 10:04:09 | ERROR  | Dropping encrypted entries 2026-04-16 10:04:09.901658 | orchestrator | 2026-04-16 10:04:09 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-16 10:04:10.181099 | orchestrator | 2026-04-16 10:04:10 | INFO  | Found 127 classic queue(s) in vhost '/' 2026-04-16 10:04:10.239682 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: alarm.all.sample 2026-04-16 10:04:10.304284 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: alarming.sample 2026-04-16 10:04:10.346694 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican.workers 2026-04-16 10:04:10.396065 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican.workers.barbican.queue 2026-04-16 10:04:10.424654 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican.workers_fanout_01a1594acb0a4e108fce6c1a24caae87 2026-04-16 10:04:10.454969 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican.workers_fanout_1c9378ba6dd34d2faf560d3b20da1a5f 2026-04-16 10:04:10.512055 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican.workers_fanout_ad5b21880f6d4748aba13eb5cdf2ce22 2026-04-16 10:04:10.556489 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: barbican_notifications.info 2026-04-16 10:04:10.596327 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central 2026-04-16 10:04:10.639387 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central.testbed-node-0 2026-04-16 10:04:10.676105 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central.testbed-node-1 2026-04-16 10:04:10.722925 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central.testbed-node-2 2026-04-16 10:04:10.759242 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_571b4c16c5e1494c8fcfab249e922ed0 2026-04-16 10:04:10.806452 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_822651294e5549c4a7d587da19ab6f30 2026-04-16 10:04:10.843209 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_86ea759e747d4cee902d8bd08fee3b92 2026-04-16 10:04:10.881499 | orchestrator | 
2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_88be1619eb3440ee80d4bc05a6ffb25a 2026-04-16 10:04:10.925267 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_a69b7fa6dbba40b88522b311f1859f53 2026-04-16 10:04:10.973798 | orchestrator | 2026-04-16 10:04:10 | INFO  | Deleted queue: central_fanout_ac64ee17198742c7ab2bf332922a2deb 2026-04-16 10:04:11.080011 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-backup 2026-04-16 10:04:11.146114 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-backup.testbed-node-0 2026-04-16 10:04:11.189150 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-backup.testbed-node-1 2026-04-16 10:04:11.229928 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-backup.testbed-node-2 2026-04-16 10:04:11.267656 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-scheduler 2026-04-16 10:04:11.310542 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-scheduler.testbed-node-0 2026-04-16 10:04:11.357101 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-scheduler.testbed-node-1 2026-04-16 10:04:11.397972 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-scheduler.testbed-node-2 2026-04-16 10:04:11.435131 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume 2026-04-16 10:04:11.485523 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes 2026-04-16 10:04:11.547906 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 2026-04-16 10:04:11.588049 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes 2026-04-16 10:04:11.633125 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 2026-04-16 10:04:11.666279 | orchestrator | 
2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes 2026-04-16 10:04:11.710401 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 2026-04-16 10:04:11.753331 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: compute 2026-04-16 10:04:11.808154 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: compute.testbed-node-3 2026-04-16 10:04:11.875158 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: compute.testbed-node-4 2026-04-16 10:04:11.933578 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: compute.testbed-node-5 2026-04-16 10:04:11.972005 | orchestrator | 2026-04-16 10:04:11 | INFO  | Deleted queue: conductor 2026-04-16 10:04:12.021730 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: conductor.testbed-node-0 2026-04-16 10:04:12.076751 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: conductor.testbed-node-1 2026-04-16 10:04:12.137655 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: conductor.testbed-node-2 2026-04-16 10:04:12.185913 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: event.sample 2026-04-16 10:04:12.216031 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.11:55926 -> 192.168.16.10:5672 2026-04-16 10:04:12.232334 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.10:39060 -> 192.168.16.10:5672 2026-04-16 10:04:12.251900 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.11:48030 -> 192.168.16.10:5672 2026-04-16 10:04:12.273708 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.12:53246 -> 192.168.16.11:5672 2026-04-16 10:04:12.296238 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.10:39042 -> 192.168.16.10:5672 2026-04-16 10:04:12.313958 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.11:47922 -> 
192.168.16.10:5672 2026-04-16 10:04:12.351349 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.10:39052 -> 192.168.16.10:5672 2026-04-16 10:04:12.365630 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.12:55242 -> 192.168.16.10:5672 2026-04-16 10:04:12.387257 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed connection: 192.168.16.12:53252 -> 192.168.16.11:5672 2026-04-16 10:04:12.387703 | orchestrator | 2026-04-16 10:04:12 | INFO  | Closed 9 connection(s) for queue: magnum-conductor 2026-04-16 10:04:12.418653 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor 2026-04-16 10:04:12.465236 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor.egjvi5e4un6c 2026-04-16 10:04:12.505709 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor.eyrmbnnnbzyv 2026-04-16 10:04:12.545445 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor.xdghpoj555ep 2026-04-16 10:04:12.590338 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_15aa7d26dabb429bbc34c1d5ea07ba13 2026-04-16 10:04:12.626745 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_4853ffdfb8914945815a21ec5936502f 2026-04-16 10:04:12.673739 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_61fd3efe2ab5476d96502b6dba978c04 2026-04-16 10:04:12.715511 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_64eace644ebd4769beed7b389a18cf01 2026-04-16 10:04:12.763293 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_893062836498442b9d984d31d217f88c 2026-04-16 10:04:12.805615 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_8c67448e4ce143caaf9be291bf1729b2 2026-04-16 10:04:12.845018 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: 
magnum-conductor_fanout_a08dee36b3e74510addd91c9642fbba5 2026-04-16 10:04:12.881784 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_d0ec7e6dd39d428d94bdf8ab9c061905 2026-04-16 10:04:12.920942 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: magnum-conductor_fanout_fb3659050193464d936ffb2a57fd1207 2026-04-16 10:04:12.959804 | orchestrator | 2026-04-16 10:04:12 | INFO  | Deleted queue: manila-data 2026-04-16 10:04:13.007865 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-data.testbed-node-0 2026-04-16 10:04:13.068898 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-data.testbed-node-1 2026-04-16 10:04:13.139157 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-data.testbed-node-2 2026-04-16 10:04:13.186222 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-scheduler 2026-04-16 10:04:13.244843 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-scheduler.testbed-node-0 2026-04-16 10:04:13.285952 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-scheduler.testbed-node-1 2026-04-16 10:04:13.361220 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-scheduler.testbed-node-2 2026-04-16 10:04:13.428044 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share 2026-04-16 10:04:13.466916 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share.testbed-node-0@cephfsnative1 2026-04-16 10:04:13.518575 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share.testbed-node-1@cephfsnative1 2026-04-16 10:04:13.563971 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share.testbed-node-2@cephfsnative1 2026-04-16 10:04:13.626800 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share_fanout_bc014dd0430d440da80eab884cbe35f2 2026-04-16 10:04:13.663899 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: 
manila-share_fanout_d0e66675da8949828cdf192119345c83 2026-04-16 10:04:13.713141 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: manila-share_fanout_e50f5a12b70f4b5282d4b61e530834ed 2026-04-16 10:04:13.858693 | orchestrator | 2026-04-16 10:04:13 | INFO  | Deleted queue: notifications.audit 2026-04-16 10:04:14.032248 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.critical 2026-04-16 10:04:14.213786 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.debug 2026-04-16 10:04:14.350168 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.error 2026-04-16 10:04:14.522617 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.info 2026-04-16 10:04:14.681543 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.sample 2026-04-16 10:04:14.901012 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: notifications.warn 2026-04-16 10:04:14.937738 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: octavia_provisioning_v2 2026-04-16 10:04:14.978801 | orchestrator | 2026-04-16 10:04:14 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-0 2026-04-16 10:04:15.033922 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-1 2026-04-16 10:04:15.074268 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: octavia_provisioning_v2.testbed-node-2 2026-04-16 10:04:15.120953 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer 2026-04-16 10:04:15.164014 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer.testbed-node-0 2026-04-16 10:04:15.210369 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer.testbed-node-1 2026-04-16 10:04:15.273607 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer.testbed-node-2 2026-04-16 10:04:15.313991 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: 
producer_fanout_033d5caf828c4c42a303069060fc965a 2026-04-16 10:04:15.346285 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer_fanout_09d0aa22620e4f1c835991367424e85f 2026-04-16 10:04:15.385826 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer_fanout_4a03b67430104baba894f0ac1546fd58 2026-04-16 10:04:15.419144 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer_fanout_66ea2503c56d4298901d4be64d5667eb 2026-04-16 10:04:15.456156 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer_fanout_834217f5d2eb4b54830711ad054eb938 2026-04-16 10:04:15.505259 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: producer_fanout_dbd8f88339e74fb3b2cbbc712c682b19 2026-04-16 10:04:15.556257 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-plugin 2026-04-16 10:04:15.596342 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-plugin.testbed-node-0 2026-04-16 10:04:15.640994 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-plugin.testbed-node-1 2026-04-16 10:04:15.685351 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-plugin.testbed-node-2 2026-04-16 10:04:15.730231 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-reports-plugin 2026-04-16 10:04:15.782694 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-reports-plugin.testbed-node-0 2026-04-16 10:04:15.823563 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-reports-plugin.testbed-node-1 2026-04-16 10:04:15.875505 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-reports-plugin.testbed-node-2 2026-04-16 10:04:15.925438 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-server-resource-versions 2026-04-16 10:04:15.981849 | orchestrator | 2026-04-16 10:04:15 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-0 2026-04-16 10:04:16.039947 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: 
q-server-resource-versions.testbed-node-1 2026-04-16 10:04:16.090851 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: q-server-resource-versions.testbed-node-2 2026-04-16 10:04:16.119831 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_5bbaa30339f04c5da005f0ec75a51bdd 2026-04-16 10:04:16.154004 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_66b42ce502994fed84e0f9f71dcdc866 2026-04-16 10:04:16.186704 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_723b6277b9d64a10af85734a9b50883d 2026-04-16 10:04:16.226993 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_79356e0a2833474e8c25c509bff7cdf1 2026-04-16 10:04:16.258230 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_9eb6a393ace44913b2460729edd636b3 2026-04-16 10:04:16.288277 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_b1f01ead0b924c52a5fb6730cdf69613 2026-04-16 10:04:16.325488 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_b5c3033668474ec09b4cb6ba46f46bbd 2026-04-16 10:04:16.365517 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_b87202d9c37b49a5b2cd65ed1f887dc0 2026-04-16 10:04:16.403790 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_cb001547a8ab4e26ae361f60fcc24efa 2026-04-16 10:04:16.435344 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: reply_e09d6d4b49414541986564190ed84971 2026-04-16 10:04:16.475143 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: scheduler 2026-04-16 10:04:16.516783 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: scheduler.testbed-node-0 2026-04-16 10:04:16.562321 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: scheduler.testbed-node-1 2026-04-16 10:04:16.622865 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: scheduler.testbed-node-2 2026-04-16 10:04:16.667755 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker 
2026-04-16 10:04:16.713422 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker.testbed-node-0 2026-04-16 10:04:16.762949 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker.testbed-node-1 2026-04-16 10:04:16.825233 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker.testbed-node-2 2026-04-16 10:04:16.860262 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker_fanout_428c4df88e5349f1b6a84c9fe682f270 2026-04-16 10:04:16.890589 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker_fanout_7c3a992d2f3c4ddf9a25929c2a5eb177 2026-04-16 10:04:16.926691 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker_fanout_902a31640a7e43248db8a05145485a3b 2026-04-16 10:04:16.968140 | orchestrator | 2026-04-16 10:04:16 | INFO  | Deleted queue: worker_fanout_b34f9629ef144cf9934da7e17eb963df 2026-04-16 10:04:17.011779 | orchestrator | 2026-04-16 10:04:17 | INFO  | Deleted queue: worker_fanout_d32e450c72c243319e57e7ac7b58c42e 2026-04-16 10:04:17.065722 | orchestrator | 2026-04-16 10:04:17 | INFO  | Deleted queue: worker_fanout_e46323bd5ac14819960ca74faf4c4316 2026-04-16 10:04:17.065835 | orchestrator | 2026-04-16 10:04:17 | INFO  | Successfully deleted 127 queue(s) in vhost '/' 2026-04-16 10:04:17.280408 | orchestrator | + osism migrate rabbitmq3to4 list 2026-04-16 10:04:23.273932 | orchestrator | 2026-04-16 10:04:23 | ERROR  | Unable to get ansible vault password 2026-04-16 10:04:23.274140 | orchestrator | 2026-04-16 10:04:23 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-16 10:04:23.274165 | orchestrator | 2026-04-16 10:04:23 | ERROR  | Dropping encrypted entries 2026-04-16 10:04:23.306420 | orchestrator | 2026-04-16 10:04:23 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 
2026-04-16 10:04:23.501016 | orchestrator | 2026-04-16 10:04:23 | INFO  | Found 13 classic queue(s) in vhost '/': 2026-04-16 10:04:23.501117 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor (vhost: /, messages: 0) 2026-04-16 10:04:23.501135 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor.egjvi5e4un6c (vhost: /, messages: 0) 2026-04-16 10:04:23.501149 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor.eyrmbnnnbzyv (vhost: /, messages: 0) 2026-04-16 10:04:23.501189 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor.xdghpoj555ep (vhost: /, messages: 0) 2026-04-16 10:04:23.501202 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_15aa7d26dabb429bbc34c1d5ea07ba13 (vhost: /, messages: 0) 2026-04-16 10:04:23.501216 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_4853ffdfb8914945815a21ec5936502f (vhost: /, messages: 0) 2026-04-16 10:04:23.501411 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_61fd3efe2ab5476d96502b6dba978c04 (vhost: /, messages: 0) 2026-04-16 10:04:23.501431 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_64eace644ebd4769beed7b389a18cf01 (vhost: /, messages: 0) 2026-04-16 10:04:23.501521 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_893062836498442b9d984d31d217f88c (vhost: /, messages: 0) 2026-04-16 10:04:23.503260 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_8c67448e4ce143caaf9be291bf1729b2 (vhost: /, messages: 0) 2026-04-16 10:04:23.503313 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_a08dee36b3e74510addd91c9642fbba5 (vhost: /, messages: 0) 2026-04-16 10:04:23.503325 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - magnum-conductor_fanout_d0ec7e6dd39d428d94bdf8ab9c061905 (vhost: /, messages: 0) 2026-04-16 10:04:23.503337 | orchestrator | 2026-04-16 10:04:23 | INFO  |  - 
magnum-conductor_fanout_fb3659050193464d936ffb2a57fd1207 (vhost: /, messages: 0) 2026-04-16 10:04:23.714607 | orchestrator | + osism migrate rabbitmq3to4 list --vhost openstack --quorum 2026-04-16 10:04:29.795071 | orchestrator | 2026-04-16 10:04:29 | ERROR  | Unable to get ansible vault password 2026-04-16 10:04:29.795179 | orchestrator | 2026-04-16 10:04:29 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key' 2026-04-16 10:04:29.795196 | orchestrator | 2026-04-16 10:04:29 | ERROR  | Dropping encrypted entries 2026-04-16 10:04:29.826630 | orchestrator | 2026-04-16 10:04:29 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack... 2026-04-16 10:04:30.011170 | orchestrator | 2026-04-16 10:04:30 | INFO  | Found 192 quorum queue(s) in vhost 'openstack': 2026-04-16 10:04:30.011285 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - alarm.all.sample (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011304 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - alarming.sample (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011320 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican.workers (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011543 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican.workers.barbican.queue (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011573 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican.workers_fanout_testbed-node-0:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011589 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican.workers_fanout_testbed-node-1:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011604 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican.workers_fanout_testbed-node-2:barbican-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.011613 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - barbican_notifications.info 
(vhost: openstack, messages: 0) 2026-04-16 10:04:30.011793 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012120 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012136 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012145 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012154 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-0:designate-central:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012187 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-0:designate-central:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012196 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-1:designate-central:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012347 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-1:designate-central:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012507 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-2:designate-central:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012521 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - central_fanout_testbed-node-2:designate-central:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012880 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012896 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012904 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012912 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - 
cinder-backup.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012921 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup_fanout_testbed-node-0:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.012929 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup_fanout_testbed-node-1:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013111 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-backup_fanout_testbed-node-2:cinder-backup:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013126 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013134 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013142 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013421 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013434 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler_fanout_testbed-node-0:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013441 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler_fanout_testbed-node-1:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013546 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-scheduler_fanout_testbed-node-2:cinder-scheduler:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013738 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013751 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: openstack, messages: 0) 2026-04-16 10:04:30.013758 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 
(vhost: openstack, messages: 0) 2026-04-16 10:04:30.013877 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_testbed-node-0:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014124 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014149 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014284 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_testbed-node-1:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014301 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014308 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014557 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_testbed-node-2:cinder-volume:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014571 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume_fanout_testbed-node-0:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014652 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume_fanout_testbed-node-1:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014831 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - cinder-volume_fanout_testbed-node-2:cinder-volume:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014850 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014940 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute.testbed-node-3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.014957 | orchestrator 
| 2026-04-16 10:04:30 | INFO  |  - compute.testbed-node-4 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015138 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute.testbed-node-5 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015152 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute_fanout_testbed-node-3:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015247 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute_fanout_testbed-node-4:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015258 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - compute_fanout_testbed-node-5:nova-compute:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015381 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015650 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015672 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015755 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015766 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015773 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-0:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015910 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.015921 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-1:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016061 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:1 (vhost: openstack, messages: 0) 
2026-04-16 10:04:30.016193 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - conductor_fanout_testbed-node-2:nova-conductor:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016218 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - event.sample (vhost: openstack, messages: 7) 2026-04-16 10:04:30.016343 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016354 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016588 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016603 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016844 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data_fanout_testbed-node-0:manila-data:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016881 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data_fanout_testbed-node-1:manila-data:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016949 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-data_fanout_testbed-node-2:manila-data:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.016974 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017089 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017248 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017258 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017302 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler_fanout_testbed-node-0:manila-scheduler:1 (vhost: openstack, messages: 0) 
2026-04-16 10:04:30.017396 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler_fanout_testbed-node-1:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017407 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-scheduler_fanout_testbed-node-2:manila-scheduler:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017591 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017604 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017706 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017716 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017840 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share_fanout_testbed-node-0:manila-share:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017981 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share_fanout_testbed-node-1:manila-share:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.017991 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - manila-share_fanout_testbed-node-2:manila-share:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018120 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.audit (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018264 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.critical (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018284 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.debug (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018388 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.error (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018557 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - 
notifications.info (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018572 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.sample (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018756 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - notifications.warn (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018786 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018830 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018968 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.018977 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019094 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-0:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019181 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-1:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019229 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - octavia_provisioning_v2_fanout_testbed-node-2:octavia-worker:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019378 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - osism-listener-cinder (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019386 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - osism-listener-glance (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019572 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - osism-listener-ironic (vhost: openstack, messages: 0) 2026-04-16 10:04:30.019619 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - osism-listener-keystone (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021149 | orchestrator | 2026-04-16 10:04:30 | INFO  
|  - osism-listener-neutron (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021168 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - osism-listener-nova (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021174 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021180 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021197 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021203 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021209 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021214 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-0:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021220 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021226 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-1:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021231 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021237 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - producer_fanout_testbed-node-2:designate-producer:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021252 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021257 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021263 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - 
q-plugin.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021268 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021274 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021280 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021285 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-0:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021291 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021296 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021302 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-1:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021307 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:4 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021312 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:5 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021324 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-plugin_fanout_testbed-node-2:neutron-server:6 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021376 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021383 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021389 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: openstack, 
messages: 0) 2026-04-16 10:04:30.021395 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021544 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021571 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021671 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021757 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021771 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.021931 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-0:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022010 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022055 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022135 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022150 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022239 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-16 
10:04:30.022434 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-1:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022489 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022550 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:10 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022559 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:11 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022655 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:12 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022663 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022823 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-reports-plugin_fanout_testbed-node-2:neutron-server:3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022838 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022916 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.022976 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023168 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023178 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023275 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - 
q-server-resource-versions_fanout_testbed-node-0:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023290 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-0:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023298 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023407 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023511 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-1:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023569 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:7 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023699 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:8 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.023735 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - q-server-resource-versions_fanout_testbed-node-2:neutron-server:9 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.024290 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-0:designate-manage:1 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.024382 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-0:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.024401 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-0:designate-producer:4 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.024417 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-1:designate-producer:3 (vhost: openstack, messages: 0) 2026-04-16 10:04:30.024664 | orchestrator | 2026-04-16 10:04:30 | INFO  |  
- reply_testbed-node-1:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024703 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-2:designate-producer:3 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024720 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-2:designate-producer:4 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024737 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-3:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024753 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-4:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024769 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - reply_testbed-node-5:nova-compute:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024785 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024801 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024829 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.024991 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025019 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025036 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-0:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025052 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025081 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-1:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025228 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025268 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - scheduler_fanout_testbed-node-2:nova-scheduler:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025286 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025438 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker.testbed-node-0 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025515 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker.testbed-node-1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025535 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker.testbed-node-2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025551 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025736 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-0:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025765 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025783 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-1:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025899 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:1 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.025926 | orchestrator | 2026-04-16 10:04:30 | INFO  |  - worker_fanout_testbed-node-2:designate-worker:2 (vhost: openstack, messages: 0)
2026-04-16 10:04:30.152995 | orchestrator | + osism migrate rabbitmq3to4 delete-exchanges
2026-04-16 10:04:35.399991 | orchestrator | 2026-04-16 10:04:35 | ERROR  | Unable to get ansible vault password
2026-04-16 10:04:35.400113 | orchestrator | 2026-04-16 10:04:35 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-16 10:04:35.400131 | orchestrator | 2026-04-16 10:04:35 | ERROR  | Dropping encrypted entries
2026-04-16 10:04:35.432215 | orchestrator | 2026-04-16 10:04:35 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-16 10:04:35.453829 | orchestrator | 2026-04-16 10:04:35 | INFO  | Found 27 exchange(s) in vhost '/'
2026-04-16 10:04:35.496626 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: aodh
2026-04-16 10:04:35.548376 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: ceilometer
2026-04-16 10:04:35.584538 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: cinder
2026-04-16 10:04:35.629965 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: designate
2026-04-16 10:04:35.663651 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: dns
2026-04-16 10:04:35.702530 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: glance
2026-04-16 10:04:35.753728 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: heat
2026-04-16 10:04:35.797628 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: ironic
2026-04-16 10:04:35.831255 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: keystone
2026-04-16 10:04:35.874687 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: l3_agent_fanout
2026-04-16 10:04:35.931882 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: magnum
2026-04-16 10:04:35.998707 | orchestrator | 2026-04-16 10:04:35 | INFO  | Deleted exchange: magnum-conductor_fanout
2026-04-16 10:04:36.045865 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron
2026-04-16 10:04:36.081020 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron-vo-Network-1.1_fanout
2026-04-16 10:04:36.115346 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron-vo-Port-1.10_fanout
2026-04-16 10:04:36.154815 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron-vo-SecurityGroup-1.6_fanout
2026-04-16 10:04:36.190822 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron-vo-SecurityGroupRule-1.3_fanout
2026-04-16 10:04:36.226263 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: neutron-vo-Subnet-1.2_fanout
2026-04-16 10:04:36.263403 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: nova
2026-04-16 10:04:36.298661 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: octavia
2026-04-16 10:04:36.352198 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: openstack
2026-04-16 10:04:36.393037 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: q-agent-notifier-port-update_fanout
2026-04-16 10:04:36.429596 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: q-agent-notifier-security_group-update_fanout
2026-04-16 10:04:36.466937 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: scheduler_fanout
2026-04-16 10:04:36.509678 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: swift
2026-04-16 10:04:36.554215 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: trove
2026-04-16 10:04:36.594488 | orchestrator | 2026-04-16 10:04:36 | INFO  | Deleted exchange: zaqar
2026-04-16 10:04:36.594687 | orchestrator | 2026-04-16 10:04:36 | INFO  | Successfully deleted 27 exchange(s) in vhost '/'
2026-04-16 10:04:36.762228 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges
2026-04-16 10:04:42.551924 | orchestrator | 2026-04-16 10:04:42 | ERROR  | Unable to get ansible vault password
2026-04-16 10:04:42.552011 | orchestrator | 2026-04-16 10:04:42 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-04-16 10:04:42.552022 | orchestrator | 2026-04-16 10:04:42 | ERROR  | Dropping encrypted entries
2026-04-16 10:04:42.585380 | orchestrator | 2026-04-16 10:04:42 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-04-16 10:04:42.600672 | orchestrator | 2026-04-16 10:04:42 | INFO  | No exchanges found in vhost '/'
2026-04-16 10:04:42.811405 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-04-16 10:04:42.811605 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/400-monitoring.sh
2026-04-16 10:04:44.042141 | orchestrator | 2026-04-16 10:04:44 | INFO  | Prepare task for execution of prometheus.
2026-04-16 10:04:44.110677 | orchestrator | 2026-04-16 10:04:44 | INFO  | Task b6ff8ca0-81d1-4089-bc68-946efa7a6f92 (prometheus) was prepared for execution.
2026-04-16 10:04:44.110757 | orchestrator | 2026-04-16 10:04:44 | INFO  | It takes a moment until task b6ff8ca0-81d1-4089-bc68-946efa7a6f92 (prometheus) has been started and output is visible here.
2026-04-16 10:05:00.439934 | orchestrator |
2026-04-16 10:05:00.440042 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 10:05:00.440055 | orchestrator |
2026-04-16 10:05:00.440063 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 10:05:00.440070 | orchestrator | Thursday 16 April 2026 10:04:48 +0000 (0:00:01.423) 0:00:01.423 ********
2026-04-16 10:05:00.440077 | orchestrator | ok: [testbed-manager]
2026-04-16 10:05:00.440085 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:05:00.440092 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:05:00.440099 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:05:00.440105 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:05:00.440112 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:05:00.440118 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:05:00.440124 | orchestrator |
2026-04-16 10:05:00.440131 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 10:05:00.440137 | orchestrator | Thursday 16 April 2026 10:04:51 +0000 (0:00:02.614) 0:00:04.038 ********
2026-04-16 10:05:00.440145 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440152 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440159 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440166 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440172 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440179 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440208 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-04-16 10:05:00.440215 | orchestrator |
2026-04-16 10:05:00.440222 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-04-16 10:05:00.440228 | orchestrator |
2026-04-16 10:05:00.440235 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-16 10:05:00.440242 | orchestrator | Thursday 16 April 2026 10:04:55 +0000 (0:00:04.586) 0:00:08.624 ********
2026-04-16 10:05:00.440250 | orchestrator | included: /ansible/roles/prometheus/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 10:05:00.440259 | orchestrator |
2026-04-16 10:05:00.440266 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-04-16 10:05:00.440273 | orchestrator | Thursday 16 April 2026 10:04:58 +0000 (0:00:02.539) 0:00:11.164 ********
2026-04-16 10:05:00.440285 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-16 10:05:00.440296 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440316 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440341 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440349 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440362 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440368 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440375 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:00.440382 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:00.440388 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:00.440400 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:00.440413 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383090 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383187 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383199 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383210 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:01.383238 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:05:01.383250 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:01.383293 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383305 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:01.383314 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:01.383324 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383333 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383342 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383356 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383366 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:01.383386 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:07.782848 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:07.782934 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:07.782943 | orchestrator |
2026-04-16 10:05:07.782950 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-04-16 10:05:07.782962 | orchestrator | Thursday 16 April 2026 10:05:02 +0000 (0:00:04.088) 0:00:15.252 ********
2026-04-16 10:05:07.782968 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-04-16 10:05:07.782974 | orchestrator |
2026-04-16 10:05:07.782979 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2026-04-16 10:05:07.782984 | orchestrator | Thursday 16 April 2026 10:05:05 +0000 (0:00:02.521) 0:00:17.774 ********
2026-04-16 10:05:07.782991 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-16 10:05:07.783007 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783026 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783043 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783048 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783053 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783058 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783062 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:05:07.783070 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:07.783081 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:07.783086 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:07.783097 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:10.157315 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:10.157492 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:10.157523 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:05:10.157545 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:10.157609 | orchestrator | ok: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:05:10.157623 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:05:10.157640 | orchestrator | ok: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:05:10.157684 | orchestrator | ok: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 10:05:10.157705 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:10.157727 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:05:10.157746 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:10.157782 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:10.157801 | orchestrator | ok: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:10.157821 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:10.157854 | orchestrator | ok: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:12.823594 | orchestrator | ok: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:12.823689 | orchestrator | ok: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:12.823698 | orchestrator | 2026-04-16 10:05:12.823707 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-04-16 10:05:12.823715 | orchestrator | Thursday 16 April 2026 10:05:11 +0000 (0:00:06.324) 0:00:24.099 ******** 2026-04-16 10:05:12.823763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-16 10:05:12.823772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:12.823779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:12.823786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:12.823807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:12.823815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:12.823822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:12.823837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:12.823842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:12.823847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:12.823851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:12.823858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:13.467123 | orchestrator | skipping: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:05:13.467323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:13.467375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:13.467399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:13.467421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:13.467503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-04-16 10:05:13.467526 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:05:13.467573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:13.467596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:13.467634 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:05:13.467655 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:05:13.467678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 
10:05:13.467710 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:13.467733 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:05:13.467753 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:13.467774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:13.467795 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:05:13.467817 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:13.467853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:15.891330 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:05:15.891514 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:15.891537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:15.891551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:15.891564 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:05:15.891576 | orchestrator | 2026-04-16 10:05:15.891606 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-04-16 10:05:15.891620 | orchestrator | Thursday 16 April 2026 10:05:14 +0000 (0:00:03.320) 0:00:27.420 ******** 2026-04-16 10:05:15.891631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:15.891646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-04-16 10:05:15.891659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:15.891719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:15.891733 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:15.891744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:15.891761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:15.891774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:15.891786 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:15.891797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:15.891825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-04-16 10:05:16.581885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:16.582007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:16.582091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:16.582108 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:05:16.582121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:16.582134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:16.582146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:16.582206 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:05:16.582222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:16.582234 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:05:16.582245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 
10:05:16.582257 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:05:16.582273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:16.582286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:16.582297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-04-16 10:05:16.582317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:16.582329 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:05:16.582349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:20.710187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-04-16 10:05:20.710309 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:05:20.710323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-04-16 10:05:20.710347 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:20.710354 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:05:20.710362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-04-16 10:05:20.710370 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:05:20.710377 | orchestrator | 2026-04-16 10:05:20.710384 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-04-16 10:05:20.710393 | orchestrator | Thursday 16 April 2026 10:05:17 +0000 (0:00:03.248) 0:00:30.668 ******** 2026-04-16 10:05:20.710423 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-16 10:05:20.710475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 
'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710535 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710550 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:05:20.710558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:20.710571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698234 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698328 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698356 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698510 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:05:22.698528 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698537 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:22.698583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:22.698599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:56.508403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:05:56.508659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:56.508728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:56.508752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:05:56.508774 | orchestrator | 2026-04-16 10:05:56.508797 | orchestrator | TASK [prometheus : Find custom 
prometheus alert rules files] ******************* 
2026-04-16 10:05:56.508818 | orchestrator | Thursday 16 April 2026 10:05:24 +0000 (0:00:06.984) 0:00:37.652 ********
2026-04-16 10:05:56.508839 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 10:05:56.508859 | orchestrator |
2026-04-16 10:05:56.508879 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-04-16 10:05:56.508896 | orchestrator | Thursday 16 April 2026 10:05:27 +0000 (0:00:02.167) 0:00:39.820 ********
2026-04-16 10:05:56.508914 | orchestrator | skipping: [testbed-manager]
2026-04-16 10:05:56.508931 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:05:56.508949 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:05:56.508967 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:05:56.508984 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:05:56.509001 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:05:56.509018 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:05:56.509037 | orchestrator |
2026-04-16 10:05:56.509055 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-04-16 10:05:56.509074 | orchestrator | Thursday 16 April 2026 10:05:29 +0000 (0:00:02.051) 0:00:41.871 ********
2026-04-16 10:05:56.509094 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 10:05:56.509112 | orchestrator |
2026-04-16 10:05:56.509131 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-04-16 10:05:56.509146 | orchestrator | Thursday 16 April 2026 10:05:30 +0000 (0:00:01.691) 0:00:43.563 ********
2026-04-16 10:05:56.509162 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509259 | orchestrator | ok: [testbed-manager -> localhost]
2026-04-16 10:05:56.509278 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509371 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-04-16 10:05:56.509390 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509611 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 10:05:56.509622 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509704 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-04-16 10:05:56.509722 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509827 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-04-16 10:05:56.509845 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.509938 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-04-16 10:05:56.509956 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-04-16 10:05:56.510126 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-04-16 10:05:56.510143 | orchestrator |
2026-04-16 10:05:56.510159 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-04-16 10:05:56.510175 | orchestrator | Thursday 16 April 2026 10:05:34 +0000 (0:00:03.300) 0:00:46.864 ********
2026-04-16 10:05:56.510190 | orchestrator | skipping: 
[testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510207 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:05:56.510222 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510236 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:05:56.510250 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510266 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:05:56.510283 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510299 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:05:56.510316 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510334 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:05:56.510350 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-04-16 10:05:56.510366 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:05:56.510382 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-04-16 10:05:56.510445 | orchestrator | 2026-04-16 10:05:56.510464 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-04-16 10:05:56.510480 | orchestrator | Thursday 16 April 2026 10:05:51 +0000 (0:00:17.458) 0:01:04.323 ******** 2026-04-16 10:05:56.510497 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510513 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510529 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:05:56.510546 | 
orchestrator | skipping: [testbed-node-1] 2026-04-16 10:05:56.510562 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510578 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:05:56.510596 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510613 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:05:56.510629 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510645 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:05:56.510662 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-04-16 10:05:56.510678 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:05:56.510694 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-04-16 10:05:56.510710 | orchestrator | 2026-04-16 10:05:56.510726 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-04-16 10:05:56.510742 | orchestrator | Thursday 16 April 2026 10:05:55 +0000 (0:00:04.338) 0:01:08.661 ******** 2026-04-16 10:05:56.510777 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 10:06:36.240815 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.240967 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 10:06:36.240994 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.241007 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 
10:06:36.241019 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.241030 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 10:06:36.241041 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.241067 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-04-16 10:06:36.241079 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 10:06:36.241091 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.241106 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-04-16 10:06:36.241125 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.241144 | orchestrator | 2026-04-16 10:06:36.241164 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-04-16 10:06:36.241185 | orchestrator | Thursday 16 April 2026 10:05:58 +0000 (0:00:02.808) 0:01:11.470 ******** 2026-04-16 10:06:36.241205 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 10:06:36.241223 | orchestrator | 2026-04-16 10:06:36.241242 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-04-16 10:06:36.241262 | orchestrator | Thursday 16 April 2026 10:06:00 +0000 (0:00:01.731) 0:01:13.201 ******** 2026-04-16 10:06:36.241281 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:06:36.241339 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.241362 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.241381 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.241435 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
10:06:36.241449 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.241461 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.241474 | orchestrator | 2026-04-16 10:06:36.241487 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-04-16 10:06:36.241499 | orchestrator | Thursday 16 April 2026 10:06:02 +0000 (0:00:01.838) 0:01:15.040 ******** 2026-04-16 10:06:36.241511 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:06:36.241524 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.241540 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.241560 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.241579 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:06:36.241600 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:06:36.241619 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:06:36.241638 | orchestrator | 2026-04-16 10:06:36.241657 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-04-16 10:06:36.241677 | orchestrator | Thursday 16 April 2026 10:06:05 +0000 (0:00:03.276) 0:01:18.317 ******** 2026-04-16 10:06:36.241694 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241716 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241734 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:06:36.241752 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.241769 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241788 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.241807 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241819 | orchestrator | skipping: [testbed-node-3] 
2026-04-16 10:06:36.241830 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241841 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.241852 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241862 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.241873 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-04-16 10:06:36.241884 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.241894 | orchestrator | 2026-04-16 10:06:36.241905 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-04-16 10:06:36.241916 | orchestrator | Thursday 16 April 2026 10:06:08 +0000 (0:00:02.744) 0:01:21.062 ******** 2026-04-16 10:06:36.241926 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.241937 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.241948 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.241959 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.241975 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.241993 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.242011 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.242118 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.242167 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-04-16 10:06:36.242181 | orchestrator | skipping: 
[testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.242210 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.242221 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-04-16 10:06:36.242232 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.242243 | orchestrator | 2026-04-16 10:06:36.242254 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-04-16 10:06:36.242265 | orchestrator | Thursday 16 April 2026 10:06:11 +0000 (0:00:02.749) 0:01:23.811 ******** 2026-04-16 10:06:36.242276 | orchestrator | [WARNING]: Skipped 2026-04-16 10:06:36.242296 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-04-16 10:06:36.242307 | orchestrator | due to this access issue: 2026-04-16 10:06:36.242324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-04-16 10:06:36.242342 | orchestrator | not a directory 2026-04-16 10:06:36.242360 | orchestrator | ok: [testbed-manager -> localhost] 2026-04-16 10:06:36.242378 | orchestrator | 2026-04-16 10:06:36.242395 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-04-16 10:06:36.242443 | orchestrator | Thursday 16 April 2026 10:06:13 +0000 (0:00:02.146) 0:01:25.957 ******** 2026-04-16 10:06:36.242463 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:06:36.242481 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.242499 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.242511 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.242521 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.242532 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.242543 | orchestrator | skipping: [testbed-node-5] 2026-04-16 
10:06:36.242553 | orchestrator | 2026-04-16 10:06:36.242564 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-04-16 10:06:36.242575 | orchestrator | Thursday 16 April 2026 10:06:15 +0000 (0:00:01.972) 0:01:27.930 ******** 2026-04-16 10:06:36.242586 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:06:36.242597 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.242607 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.242618 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.242629 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.242639 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.242650 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.242660 | orchestrator | 2026-04-16 10:06:36.242671 | orchestrator | TASK [prometheus : Check for the existence of Prometheus v2 container volume] *** 2026-04-16 10:06:36.242682 | orchestrator | Thursday 16 April 2026 10:06:17 +0000 (0:00:02.459) 0:01:30.389 ******** 2026-04-16 10:06:36.242693 | orchestrator | ok: [testbed-manager] 2026-04-16 10:06:36.242704 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:06:36.242714 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:06:36.242725 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:06:36.242736 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:06:36.242746 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:06:36.242757 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:06:36.242767 | orchestrator | 2026-04-16 10:06:36.242778 | orchestrator | TASK [prometheus : Gracefully stop Prometheus] ********************************* 2026-04-16 10:06:36.242789 | orchestrator | Thursday 16 April 2026 10:06:20 +0000 (0:00:02.430) 0:01:32.820 ******** 2026-04-16 10:06:36.242800 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.242810 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.242821 | orchestrator | skipping: 
[testbed-node-2] 2026-04-16 10:06:36.242832 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.242843 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.242853 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.242864 | orchestrator | changed: [testbed-manager] 2026-04-16 10:06:36.242875 | orchestrator | 2026-04-16 10:06:36.242885 | orchestrator | TASK [prometheus : Create new Prometheus v3 volume] **************************** 2026-04-16 10:06:36.242907 | orchestrator | Thursday 16 April 2026 10:06:28 +0000 (0:00:08.132) 0:01:40.953 ******** 2026-04-16 10:06:36.242917 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.242928 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.242939 | orchestrator | changed: [testbed-manager] 2026-04-16 10:06:36.242950 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.242960 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.242971 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.242982 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.242993 | orchestrator | 2026-04-16 10:06:36.243003 | orchestrator | TASK [prometheus : Move _data from old to new volume] ************************** 2026-04-16 10:06:36.243014 | orchestrator | Thursday 16 April 2026 10:06:30 +0000 (0:00:02.199) 0:01:43.153 ******** 2026-04-16 10:06:36.243025 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.243036 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.243046 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.243057 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.243068 | orchestrator | changed: [testbed-manager] 2026-04-16 10:06:36.243079 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.243089 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.243100 | orchestrator | 2026-04-16 10:06:36.243110 | orchestrator | TASK [prometheus : Remove old Prometheus v2 
volume] **************************** 2026-04-16 10:06:36.243121 | orchestrator | Thursday 16 April 2026 10:06:32 +0000 (0:00:02.101) 0:01:45.254 ******** 2026-04-16 10:06:36.243132 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:06:36.243143 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:06:36.243153 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:06:36.243164 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:06:36.243174 | orchestrator | changed: [testbed-manager] 2026-04-16 10:06:36.243185 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:06:36.243196 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:06:36.243206 | orchestrator | 2026-04-16 10:06:36.243217 | orchestrator | TASK [service-check-containers : prometheus | Check containers] **************** 2026-04-16 10:06:36.243228 | orchestrator | Thursday 16 April 2026 10:06:34 +0000 (0:00:02.434) 0:01:47.689 ******** 2026-04-16 10:06:36.243264 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready 
HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-04-16 10:06:37.957996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-04-16 10:06:37.958340 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:06:37.958354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:06:37.958374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:06:37.958387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-04-16 10:06:37.958451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-04-16 10:06:37.958467 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:06:37.958487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:37.958509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.164923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165030 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165057 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.165185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.165216 | orchestrator |
2026-04-16 10:06:44.165227 | orchestrator | TASK [service-check-containers : prometheus | Notify handlers to restart containers] ***
2026-04-16 10:06:44.165239 | orchestrator | Thursday 16 April 2026 10:06:41 +0000 (0:00:06.554) 0:01:54.243 ********
2026-04-16 10:06:44.165249 | orchestrator | changed: [testbed-manager] => {
2026-04-16 10:06:44.165259 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165269 | orchestrator | }
2026-04-16 10:06:44.165278 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 10:06:44.165304 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165328 | orchestrator | }
2026-04-16 10:06:44.165339 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 10:06:44.165349 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165360 | orchestrator | }
2026-04-16 10:06:44.165368 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 10:06:44.165378 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165387 | orchestrator | }
2026-04-16 10:06:44.165474 | orchestrator | changed: [testbed-node-3] => {
2026-04-16 10:06:44.165486 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165496 | orchestrator | }
2026-04-16 10:06:44.165504 | orchestrator | changed: [testbed-node-4] => {
2026-04-16 10:06:44.165514 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165523 | orchestrator | }
2026-04-16 10:06:44.165532 | orchestrator | changed: [testbed-node-5] => {
2026-04-16 10:06:44.165541 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:06:44.165550 | orchestrator | }
2026-04-16 10:06:44.165558 | orchestrator |
2026-04-16 10:06:44.165568 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 10:06:44.165578 | orchestrator | Thursday 16 April 2026 10:06:43 +0000 (0:00:02.120) 0:01:56.363 ********
2026-04-16 10:06:44.165603 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})
2026-04-16 10:06:44.451546 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:44.451656 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.451673 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:06:44.451721 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'cap_add': ['CAP_NET_RAW'], 'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451735 | orchestrator | skipping: [testbed-manager]
2026-04-16 10:06:44.451747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:44.451759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.451811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451821 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:06:44.451838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:44.451853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:44.451874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:44.451890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:47.839357 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:06:47.839515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:47.839534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:47.839546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:47.839582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-04-16 10:06:47.839618 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:06:47.839629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:47.839639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839681 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:06:47.839691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:47.839701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839729 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:06:47.839744 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-04-16 10:06:47.839754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-04-16 10:06:47.839774 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:06:47.839784 | orchestrator |
2026-04-16 10:06:47.839794 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:06:47.839806 | orchestrator | Thursday 16 April 2026 10:06:46 +0000 (0:00:03.266) 0:01:59.630 ********
2026-04-16 10:06:47.839815 | orchestrator |
2026-04-16 10:06:47.839825 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:06:47.839835 | orchestrator | Thursday 16 April 2026 10:06:47 +0000 (0:00:00.456) 0:02:00.086 ********
2026-04-16 10:06:47.839844 | orchestrator |
2026-04-16 10:06:47.839854 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:06:47.839869 | orchestrator | Thursday 16 April 2026 10:06:47 +0000 (0:00:00.441) 0:02:00.527 ********
2026-04-16 10:09:05.535600 | orchestrator |
2026-04-16 10:09:05.535710 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:09:05.535725 | orchestrator | Thursday 16 April 2026 10:06:48 +0000 (0:00:00.469) 0:02:00.996 ********
2026-04-16 10:09:05.535736 | orchestrator |
2026-04-16 10:09:05.535746 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:09:05.535780 | orchestrator | Thursday 16 April 2026 10:06:48 +0000 (0:00:00.690) 0:02:01.687 ********
2026-04-16 10:09:05.535790 | orchestrator |
2026-04-16 10:09:05.535800 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:09:05.535810 | orchestrator | Thursday 16 April 2026 10:06:49 +0000 (0:00:00.424) 0:02:02.112 ********
2026-04-16 10:09:05.535820 | orchestrator |
2026-04-16 10:09:05.535830 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-04-16 10:09:05.535840 | orchestrator | Thursday 16 April 2026 10:06:49 +0000 (0:00:00.434) 0:02:02.546 ********
2026-04-16 10:09:05.535849 | orchestrator |
2026-04-16 10:09:05.535859 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-04-16 10:09:05.535869 | orchestrator | Thursday 16 April 2026 10:06:50 +0000 (0:00:00.797) 0:02:03.344 ********
2026-04-16 10:09:05.535879 | orchestrator | changed: [testbed-manager]
2026-04-16 10:09:05.535889 | orchestrator |
2026-04-16 10:09:05.535899 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-04-16 10:09:05.535908 | orchestrator | Thursday 16 April 2026 10:07:13 +0000 (0:00:22.881) 0:02:26.225 ********
2026-04-16 10:09:05.535918 | orchestrator | changed: [testbed-node-3]
2026-04-16 10:09:05.535928 | orchestrator | changed: [testbed-manager]
2026-04-16 10:09:05.535937 | orchestrator | changed: [testbed-node-4]
2026-04-16 10:09:05.535947 | orchestrator | changed: [testbed-node-5]
2026-04-16 10:09:05.535956 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:09:05.535966 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:09:05.535975 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:09:05.535985 | orchestrator |
2026-04-16 10:09:05.535994 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-04-16 10:09:05.536004 | orchestrator | Thursday 16 April 2026 10:07:30 +0000 (0:00:17.313) 0:02:43.539 ********
2026-04-16 10:09:05.536014 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:09:05.536023 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:09:05.536033 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:09:05.536043 | orchestrator |
2026-04-16 10:09:05.536060 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-04-16 10:09:05.536079 | orchestrator | Thursday 16 April 2026 10:07:43 +0000 (0:00:12.917) 0:02:56.456 ********
2026-04-16 10:09:05.536095 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:09:05.536112 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:09:05.536130 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:09:05.536148 | orchestrator |
2026-04-16 10:09:05.536166 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-04-16 10:09:05.536185 | orchestrator | Thursday 16 April 2026 10:07:56 +0000 (0:00:12.919) 0:03:09.376 ********
2026-04-16 10:09:05.536203 | orchestrator | changed: [testbed-node-3]
2026-04-16 10:09:05.536220 | orchestrator | changed: [testbed-manager]
2026-04-16 10:09:05.536236 | orchestrator | changed: [testbed-node-4]
2026-04-16 10:09:05.536253 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:09:05.536270 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:09:05.536304 | orchestrator | changed: [testbed-node-5]
2026-04-16 10:09:05.536324 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:09:05.536368 | orchestrator |
2026-04-16 10:09:05.536385 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-04-16 10:09:05.536403 | orchestrator | Thursday 16 April 2026 10:08:13 +0000 (0:00:16.466) 0:03:25.842 ********
2026-04-16 10:09:05.536420 | orchestrator | changed: [testbed-manager]
2026-04-16 10:09:05.536432 | orchestrator |
2026-04-16 10:09:05.536444 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-04-16 10:09:05.536455 | orchestrator | Thursday 16 April 2026 10:08:27 +0000 (0:00:14.476) 0:03:40.319 ********
2026-04-16 10:09:05.536466 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:09:05.536477 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:09:05.536489 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:09:05.536498 | orchestrator |
2026-04-16 10:09:05.536508 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-04-16 10:09:05.536531 | orchestrator | Thursday 16 April 2026 10:08:40 +0000 (0:00:12.779) 0:03:53.098 ********
2026-04-16 10:09:05.536541 | orchestrator | changed: [testbed-manager]
2026-04-16 10:09:05.536550 | orchestrator |
2026-04-16 10:09:05.536560 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-04-16 10:09:05.536569 | orchestrator | Thursday 16 April 2026 10:08:52 +0000 (0:00:12.046) 0:04:05.144 ********
2026-04-16 10:09:05.536579 | orchestrator | changed: [testbed-node-3]
2026-04-16 10:09:05.536589 | orchestrator | changed: [testbed-node-4]
2026-04-16 10:09:05.536598 | orchestrator | changed: [testbed-node-5]
2026-04-16 10:09:05.536608 | orchestrator |
2026-04-16 10:09:05.536617 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:09:05.536628 | orchestrator | testbed-manager : ok=28  changed=14  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-16 10:09:05.536640 | orchestrator | testbed-node-0 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-16 10:09:05.536649 | orchestrator | testbed-node-1 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-16 10:09:05.536659 | orchestrator | testbed-node-2 : ok=17  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2026-04-16 10:09:05.536689 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-16 10:09:05.536699 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-16 10:09:05.536709 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-16 10:09:05.536719 | orchestrator |
2026-04-16 10:09:05.536729 | orchestrator |
2026-04-16 10:09:05.536738 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:09:05.536748 | orchestrator | Thursday 16 April 2026 10:09:05 +0000 (0:00:12.674) 0:04:17.818 ********
2026-04-16 10:09:05.536758 | orchestrator | ===============================================================================
2026-04-16 10:09:05.536768 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.88s
2026-04-16 10:09:05.536778 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.46s
2026-04-16 10:09:05.536787 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.31s
2026-04-16 10:09:05.536797 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.47s
2026-04-16 10:09:05.536806 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 14.48s
2026-04-16 10:09:05.536816 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.92s
2026-04-16 10:09:05.536826 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.92s
2026-04-16 10:09:05.536835 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.78s
2026-04-16 10:09:05.536845 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.67s
2026-04-16 10:09:05.536855 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 12.05s
2026-04-16 10:09:05.536864 | orchestrator | prometheus : Gracefully stop Prometheus --------------------------------- 8.13s
2026-04-16 10:09:05.536874 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.98s
2026-04-16 10:09:05.536884 | orchestrator | service-check-containers : prometheus | Check containers ---------------- 6.55s
2026-04-16 10:09:05.536893 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.32s
2026-04-16 10:09:05.536909 | orchestrator | Group hosts based on enabled services ----------------------------------- 4.59s
2026-04-16 10:09:05.536919 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.34s
2026-04-16 10:09:05.536928 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.09s
2026-04-16 10:09:05.536938 | orchestrator | prometheus : Flush handlers --------------------------------------------- 3.71s
2026-04-16 10:09:05.536948 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 3.32s
2026-04-16 10:09:05.536957 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.30s
2026-04-16 10:09:07.011721 | orchestrator | 2026-04-16 10:09:07 | INFO  | Prepare task for execution of grafana.
2026-04-16 10:09:07.080179 | orchestrator | 2026-04-16 10:09:07 | INFO  | Task 7506b419-eb8d-404e-b3db-50f68bf24f86 (grafana) was prepared for execution.
2026-04-16 10:09:07.080249 | orchestrator | 2026-04-16 10:09:07 | INFO  | It takes a moment until task 7506b419-eb8d-404e-b3db-50f68bf24f86 (grafana) has been started and output is visible here.
2026-04-16 10:09:29.388873 | orchestrator |
2026-04-16 10:09:29.389012 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 10:09:29.389034 | orchestrator |
2026-04-16 10:09:29.389046 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 10:09:29.389057 | orchestrator | Thursday 16 April 2026 10:09:12 +0000 (0:00:01.768) 0:00:01.768 ********
2026-04-16 10:09:29.389068 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:09:29.389079 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:09:29.389089 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:09:29.389099 | orchestrator |
2026-04-16 10:09:29.389109 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 10:09:29.389119 | orchestrator | Thursday 16 April 2026 10:09:13 +0000 (0:00:01.667) 0:00:03.436 ********
2026-04-16 10:09:29.389129 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-04-16 10:09:29.389140 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-04-16 10:09:29.389150 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-04-16 10:09:29.389159 | orchestrator |
2026-04-16 10:09:29.389169 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-04-16 10:09:29.389179 | orchestrator |
2026-04-16 10:09:29.389189 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-16 10:09:29.389199 | orchestrator | Thursday 16 April 2026 10:09:15 +0000 (0:00:01.759) 0:00:05.195 ********
2026-04-16 10:09:29.389211 | orchestrator | included: /ansible/roles/grafana/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:09:29.389222 | orchestrator |
2026-04-16 10:09:29.389232 | orchestrator | TASK [grafana : Checking if Grafana container needs upgrading] *****************
2026-04-16 10:09:29.389242 | orchestrator | Thursday 16 April 2026 10:09:18 +0000 (0:00:03.134) 0:00:08.330 ********
2026-04-16 10:09:29.389252 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:09:29.389261 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:09:29.389271 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:09:29.389281 | orchestrator |
2026-04-16 10:09:29.389291 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-04-16 10:09:29.389301 | orchestrator | Thursday 16 April 2026 10:09:21 +0000 (0:00:02.681) 0:00:11.011 ********
2026-04-16 10:09:29.389314 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389390 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389419 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389430 | orchestrator |
2026-04-16 10:09:29.389442 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-04-16 10:09:29.389453 | orchestrator | Thursday 16 April 2026 10:09:22 +0000 (0:00:01.634) 0:00:12.646 ********
2026-04-16 10:09:29.389464 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 10:09:29.389476 | orchestrator |
2026-04-16 10:09:29.389487 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-04-16 10:09:29.389515 | orchestrator | Thursday 16 April 2026 10:09:25 +0000 (0:00:02.084) 0:00:14.730 ********
2026-04-16 10:09:29.389527 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:09:29.389539 | orchestrator |
2026-04-16 10:09:29.389550 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-04-16 10:09:29.389561 | orchestrator | Thursday 16 April 2026 10:09:26 +0000 (0:00:01.786) 0:00:16.516 ********
2026-04-16 10:09:29.389572 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389584 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389604 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389617 | orchestrator |
2026-04-16 10:09:29.389629 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-04-16 10:09:29.389640 | orchestrator | Thursday 16 April 2026 10:09:29 +0000 (0:00:02.296) 0:00:18.812 ********
2026-04-16 10:09:29.389651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:29.389662 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:09:29.389686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130427 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:09:36.130566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130600 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:09:36.130622 | orchestrator |
2026-04-16 10:09:36.130643 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-04-16 10:09:36.130666 | orchestrator | Thursday 16 April 2026 10:09:30 +0000 (0:00:01.491) 0:00:20.304 ********
2026-04-16 10:09:36.130725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130749 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:09:36.130770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130789 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:09:36.130824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130843 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:09:36.130864 | orchestrator |
2026-04-16 10:09:36.130882 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-04-16 10:09:36.130901 | orchestrator | Thursday 16 April 2026 10:09:32 +0000 (0:00:01.736) 0:00:22.041 ********
2026-04-16 10:09:36.130951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.130977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.131015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.131036 | orchestrator |
2026-04-16 10:09:36.131057 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-04-16 10:09:36.131076 | orchestrator | Thursday 16 April 2026 10:09:34 +0000 (0:00:02.312) 0:00:24.353 ********
2026-04-16 10:09:36.131099 | orchestrator | ok: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.131130 | orchestrator | ok: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:09:36.131244 | orchestrator | ok: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.091975 | orchestrator |
2026-04-16 10:10:02.092075 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-04-16 10:10:02.092089 | orchestrator | Thursday 16 April 2026 10:09:37 +0000 (0:00:02.537) 0:00:26.891 ********
2026-04-16 10:10:02.092123 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:10:02.092134 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:10:02.092143 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:10:02.092152 | orchestrator |
2026-04-16 10:10:02.092162 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-04-16 10:10:02.092171 | orchestrator | Thursday 16 April 2026 10:09:38 +0000 (0:00:01.365) 0:00:28.257 ********
2026-04-16 10:10:02.092180 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 10:10:02.092190 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 10:10:02.092199 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-04-16 10:10:02.092208 | orchestrator |
2026-04-16 10:10:02.092216 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-04-16 10:10:02.092225 | orchestrator | Thursday 16 April 2026 10:09:40 +0000 (0:00:02.193) 0:00:30.450 ********
2026-04-16 10:10:02.092234 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 10:10:02.092244 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 10:10:02.092253 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-04-16 10:10:02.092261 | orchestrator |
2026-04-16 10:10:02.092270 | orchestrator | TASK [grafana : Check if the folder for custom grafana dashboards exists] ******
2026-04-16 10:10:02.092279 | orchestrator | Thursday 16 April 2026 10:09:42 +0000 (0:00:02.201) 0:00:32.652 ********
2026-04-16 10:10:02.092294 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-04-16 10:10:02.092308 | orchestrator |
2026-04-16 10:10:02.092322 | orchestrator | TASK [grafana : Remove templated Grafana dashboards] ***************************
2026-04-16 10:10:02.092336 | orchestrator | Thursday 16 April 2026 10:09:44 +0000 (0:00:01.728) 0:00:34.380 ********
2026-04-16 10:10:02.092350 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:10:02.092363 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:10:02.092379 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:10:02.092393 | orchestrator |
2026-04-16 10:10:02.092410 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-04-16 10:10:02.092426 | orchestrator | Thursday 16 April 2026 10:09:46 +0000 (0:00:02.002) 0:00:36.382 ********
2026-04-16 10:10:02.092442 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:10:02.092507 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:10:02.092518 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:10:02.092529 | orchestrator |
2026-04-16 10:10:02.092539 | orchestrator | TASK [service-check-containers : grafana | Check containers] *******************
2026-04-16 10:10:02.092548 | orchestrator | Thursday 16 April 2026 10:09:49 +0000 (0:00:02.660) 0:00:39.043 ********
2026-04-16 10:10:02.092561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092654 | orchestrator |
2026-04-16 10:10:02.092677 | orchestrator | TASK [service-check-containers : grafana | Notify handlers to restart containers] ***
2026-04-16 10:10:02.092692 | orchestrator | Thursday 16 April 2026 10:09:51 +0000 (0:00:02.235) 0:00:41.279 ********
2026-04-16 10:10:02.092708 | orchestrator | changed: [testbed-node-0] => {
2026-04-16 10:10:02.092724 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:10:02.092739 | orchestrator | }
2026-04-16 10:10:02.092755 | orchestrator | changed: [testbed-node-1] => {
2026-04-16 10:10:02.092771 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:10:02.092786 | orchestrator | }
2026-04-16 10:10:02.092803 | orchestrator | changed: [testbed-node-2] => {
2026-04-16 10:10:02.092820 | orchestrator |  "msg": "Notifying handlers"
2026-04-16 10:10:02.092835 | orchestrator | }
2026-04-16 10:10:02.092846 | orchestrator |
2026-04-16 10:10:02.092856 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-04-16 10:10:02.092866 | orchestrator | Thursday 16 April 2026 10:09:52 +0000 (0:00:01.339) 0:00:42.618 ********
2026-04-16 10:10:02.092878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092890 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:10:02.092901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092918 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:10:02.092934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})
2026-04-16 10:10:02.092943 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:10:02.092951 | orchestrator |
2026-04-16 10:10:02.092960 | orchestrator | TASK [grafana : Stopping all Grafana instances but the first node] *************
2026-04-16 10:10:02.092969 | orchestrator | Thursday 16 April 2026 10:09:54 +0000 (0:00:07.097) 0:00:44.038 ********
2026-04-16 10:10:02.092977 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:10:02.092986 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:10:02.092994 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:10:02.093003 | orchestrator |
2026-04-16 10:10:02.093012 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-16 10:10:02.093020 | orchestrator | Thursday 16 April 2026 10:10:01 +0000 (0:00:00.424) 0:00:51.136 ********
2026-04-16 10:10:02.093029 | orchestrator |
2026-04-16 10:10:02.093037 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-16 10:10:02.093046 | orchestrator | Thursday 16 April 2026 10:10:01 +0000 (0:00:00.424) 0:00:51.560 ********
2026-04-16 10:10:02.093055 | orchestrator |
2026-04-16 10:10:02.093072 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-04-16 10:11:45.400680 | orchestrator | Thursday 16 April 2026 10:10:02 +0000 (0:00:00.579) 0:00:52.139 ********
2026-04-16 10:11:45.400830 | orchestrator |
2026-04-16 10:11:45.400851 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-04-16 10:11:45.400863 | orchestrator | Thursday 16 April 2026 10:10:03 +0000 (0:00:00.784) 0:00:52.924 ********
2026-04-16 10:11:45.400876 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:11:45.400888 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:11:45.400900 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:11:45.400960 | orchestrator |
2026-04-16 10:11:45.400972 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-04-16 10:11:45.400984 | orchestrator | Thursday 16 April 2026 10:10:41 +0000 (0:00:38.453) 0:01:31.378 ********
2026-04-16 10:11:45.400995 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:11:45.401006 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:11:45.401017 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-04-16 10:11:45.401030 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-04-16 10:11:45.401041 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:11:45.401053 | orchestrator |
2026-04-16 10:11:45.401064 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-04-16 10:11:45.401075 | orchestrator | Thursday 16 April 2026 10:11:09 +0000 (0:00:27.740) 0:01:59.119 ********
2026-04-16 10:11:45.401086 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:11:45.401097 | orchestrator | changed: [testbed-node-1]
2026-04-16 10:11:45.401107 | orchestrator | changed: [testbed-node-2]
2026-04-16 10:11:45.401118 | orchestrator |
2026-04-16 10:11:45.401129 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:11:45.401141 | orchestrator | testbed-node-0 : ok=19  changed=6  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 10:11:45.401184 | orchestrator | testbed-node-1 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 10:11:45.401196 | orchestrator | testbed-node-2 : ok=17  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-04-16 10:11:45.401209 | orchestrator |
2026-04-16 10:11:45.401221 | orchestrator |
2026-04-16 10:11:45.401233 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:11:45.401246 | orchestrator | Thursday 16 April 2026 10:11:45 +0000 (0:00:35.707) 0:02:34.827 ********
2026-04-16 10:11:45.401258 | orchestrator | ===============================================================================
2026-04-16 10:11:45.401271 | orchestrator | grafana : Restart first grafana container ------------------------------ 38.45s
2026-04-16 10:11:45.401283 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 35.71s
2026-04-16 10:11:45.401293 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.74s
2026-04-16 10:11:45.401304 | orchestrator | grafana : Stopping all Grafana instances but the first node ------------- 7.10s
2026-04-16 10:11:45.401315 | orchestrator | grafana : include_tasks ------------------------------------------------- 3.13s
2026-04-16 10:11:45.401326 | orchestrator | grafana : Checking if Grafana container needs upgrading ----------------- 2.68s
2026-04-16 10:11:45.401337 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 2.66s
2026-04-16 10:11:45.401348 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 2.54s
2026-04-16 10:11:45.401358 | orchestrator | grafana : Copying over config.json files -------------------------------- 2.31s
2026-04-16 10:11:45.401369 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 2.30s
2026-04-16 10:11:45.401379 | orchestrator | service-check-containers : grafana | Check containers ------------------- 2.24s
2026-04-16 10:11:45.401390 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 2.20s
2026-04-16 10:11:45.401401 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 2.19s
2026-04-16 10:11:45.401412 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 2.08s
2026-04-16 10:11:45.401437 | orchestrator | grafana : Remove templated Grafana dashboards --------------------------- 2.00s
2026-04-16 10:11:45.401449 | orchestrator | grafana : Flush handlers ------------------------------------------------ 1.79s
2026-04-16 10:11:45.401459 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.79s
2026-04-16 10:11:45.401470 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.76s
2026-04-16 10:11:45.401481 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.74s
2026-04-16 10:11:45.401491 | orchestrator | grafana : Check if the folder for custom grafana dashboards exists ------ 1.73s
2026-04-16 10:11:45.566319 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/510-clusterapi.sh
2026-04-16 10:11:45.574362 | orchestrator | + set -e
2026-04-16 10:11:45.574442 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 10:11:45.574457 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 10:11:45.574470 | orchestrator | ++ INTERACTIVE=false
2026-04-16 10:11:45.574481 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 10:11:45.574491 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 10:11:45.574502 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 10:11:45.575552 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 10:11:45.581510 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-16 10:11:45.581581 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-16 10:11:45.581759 | orchestrator | ++ semver 10.0.0 8.0.0
2026-04-16 10:11:45.637996 | orchestrator | + [[ 1 -ge 0 ]]
2026-04-16 10:11:45.638205 | orchestrator | + osism apply clusterapi
2026-04-16 10:11:46.895224 | orchestrator | 2026-04-16 10:11:46 | INFO  | Prepare task for execution of clusterapi.
2026-04-16 10:11:46.959269 | orchestrator | 2026-04-16 10:11:46 | INFO  | Task 3684e979-c27f-45d6-97ec-6ab3bd4be309 (clusterapi) was prepared for execution.
2026-04-16 10:11:46.959395 | orchestrator | 2026-04-16 10:11:46 | INFO  | It takes a moment until task 3684e979-c27f-45d6-97ec-6ab3bd4be309 (clusterapi) has been started and output is visible here.
2026-04-16 10:12:33.517706 | orchestrator |
2026-04-16 10:12:33.517825 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-04-16 10:12:33.517844 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin
2026-04-16 10:12:33.517857 | orchestrator | (): Expecting value: line 2 column 1 (char 1)
2026-04-16 10:12:33.517880 | orchestrator |
2026-04-16 10:12:33.517892 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-04-16 10:12:33.517903 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin
2026-04-16 10:12:33.517914 | orchestrator | (): 'NoneType' object is not subscriptable
2026-04-16 10:12:33.517936 | orchestrator | Thursday 16 April 2026 10:11:51 +0000 (0:00:01.129) 0:00:01.129 ********
2026-04-16 10:12:33.517948 | orchestrator | included: cert_manager for testbed-manager
2026-04-16 10:12:33.517960 | orchestrator |
2026-04-16 10:12:33.517971 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-04-16 10:12:33.517982 | orchestrator | Thursday 16 April 2026 10:11:52 +0000 (0:00:00.753) 0:00:01.882 ********
2026-04-16 10:12:33.517993 | orchestrator | ok: [testbed-manager]
2026-04-16 10:12:33.518004 | orchestrator |
2026-04-16 10:12:33.518082 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-04-16 10:12:33.518096 | orchestrator | Thursday 16 April 2026 10:11:55 +0000 (0:00:03.373) 0:00:05.256 ********
2026-04-16 10:12:33.518177 | orchestrator | ok: [testbed-manager]
2026-04-16 10:12:33.518192 | orchestrator |
2026-04-16 10:12:33.518204 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-04-16 10:12:33.518215 | orchestrator |
2026-04-16 10:12:33.518226 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-04-16 10:12:33.518238 | orchestrator | Thursday 16 April 2026 10:11:59 +0000 (0:00:03.937) 0:00:09.194 ********
2026-04-16 10:12:33.518251 | orchestrator | ok: [testbed-manager]
2026-04-16 10:12:33.518263 | orchestrator |
2026-04-16 10:12:33.518275 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-04-16 10:12:33.518288 | orchestrator | Thursday 16 April 2026 10:12:00 +0000 (0:00:01.190) 0:00:10.385 ********
2026-04-16 10:12:33.518300 | orchestrator | ok: [testbed-manager]
2026-04-16 10:12:33.518312 | orchestrator |
2026-04-16 10:12:33.518324 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-04-16 10:12:33.518337 | orchestrator | Thursday 16 April 2026 10:12:00 +0000 (0:00:00.264) 0:00:10.649 ********
2026-04-16 10:12:33.518349 | orchestrator | skipping: [testbed-manager]
2026-04-16 10:12:33.518361 | orchestrator |
2026-04-16 10:12:33.518372 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-04-16 10:12:33.518398 | orchestrator | Thursday 16 April 2026 10:12:01 +0000 (0:00:00.148) 0:00:10.797 ********
2026-04-16 10:12:33.518410 | orchestrator | ok: [testbed-manager]
2026-04-16 10:12:33.518423 | orchestrator |
2026-04-16 10:12:33.518435 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-04-16 10:12:33.518447 | orchestrator | Thursday 16 April 2026 10:12:30 +0000 (0:00:29.425) 0:00:40.223 ********
2026-04-16 10:12:33.518459 | orchestrator | changed: [testbed-manager]
2026-04-16 10:12:33.518471 | orchestrator |
2026-04-16 10:12:33.518492 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:12:33.518513 | orchestrator | testbed-manager : ok=7  changed=1  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-04-16 10:12:33.518563 | orchestrator |
2026-04-16 10:12:33.518583 | orchestrator |
2026-04-16 10:12:33.518602 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:12:33.518621 | orchestrator | Thursday 16 April 2026 10:12:33 +0000 (0:00:02.706) 0:00:42.930 ********
2026-04-16 10:12:33.518641 | orchestrator | ===============================================================================
2026-04-16 10:12:33.518677 | orchestrator | Upgrade the CAPI management cluster ------------------------------------ 29.43s
2026-04-16 10:12:33.518699 | orchestrator | cert_manager : Deploy cert-manager -------------------------------------- 3.94s
2026-04-16 10:12:33.518715 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 3.37s
2026-04-16 10:12:33.518726 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.71s
2026-04-16 10:12:33.518737 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.19s
2026-04-16 10:12:33.518748 | orchestrator | Include cert_manager role ----------------------------------------------- 0.75s
2026-04-16 10:12:33.518758 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.26s
2026-04-16 10:12:33.518769 | orchestrator | Initialize the CAPI management cluster ---------------------------------- 0.15s
2026-04-16 10:12:33.680348 | orchestrator | + osism apply -a upgrade magnum
2026-04-16 10:12:34.943391 | orchestrator | 2026-04-16 10:12:34 | INFO  | Prepare task for execution of magnum.
2026-04-16 10:12:35.009507 | orchestrator | 2026-04-16 10:12:35 | INFO  | Task aa83408a-9318-4ed0-b04f-61834f23428d (magnum) was prepared for execution. 2026-04-16 10:12:35.009607 | orchestrator | 2026-04-16 10:12:35 | INFO  | It takes a moment until task aa83408a-9318-4ed0-b04f-61834f23428d (magnum) has been started and output is visible here. 2026-04-16 10:12:54.437267 | orchestrator | 2026-04-16 10:12:54.437350 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-04-16 10:12:54.437358 | orchestrator | 2026-04-16 10:12:54.437364 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-04-16 10:12:54.437369 | orchestrator | Thursday 16 April 2026 10:12:39 +0000 (0:00:01.357) 0:00:01.357 ******** 2026-04-16 10:12:54.437374 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:12:54.437381 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:12:54.437385 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:12:54.437390 | orchestrator | 2026-04-16 10:12:54.437395 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-04-16 10:12:54.437400 | orchestrator | Thursday 16 April 2026 10:12:41 +0000 (0:00:01.939) 0:00:03.297 ******** 2026-04-16 10:12:54.437405 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-04-16 10:12:54.437410 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-04-16 10:12:54.437414 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-04-16 10:12:54.437419 | orchestrator | 2026-04-16 10:12:54.437423 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-04-16 10:12:54.437428 | orchestrator | 2026-04-16 10:12:54.437432 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-16 10:12:54.437437 | orchestrator | Thursday 16 April 2026 10:12:43 +0000 (0:00:02.267) 
0:00:05.565 ******** 2026-04-16 10:12:54.437442 | orchestrator | included: /ansible/roles/magnum/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 10:12:54.437447 | orchestrator | 2026-04-16 10:12:54.437452 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-04-16 10:12:54.437457 | orchestrator | Thursday 16 April 2026 10:12:46 +0000 (0:00:02.653) 0:00:08.218 ******** 2026-04-16 10:12:54.437465 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:12:54.437502 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:12:54.437520 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:12:54.437526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:12:54.437532 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:12:54.437541 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:12:54.437546 | orchestrator | 2026-04-16 10:12:54.437551 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-04-16 10:12:54.437556 | orchestrator | Thursday 16 April 2026 10:12:48 +0000 (0:00:02.567) 0:00:10.786 ******** 2026-04-16 10:12:54.437560 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:12:54.437566 | orchestrator | 2026-04-16 10:12:54.437570 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-04-16 10:12:54.437575 | orchestrator | Thursday 16 April 2026 10:12:49 +0000 (0:00:01.073) 0:00:11.859 ******** 2026-04-16 10:12:54.437579 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:12:54.437584 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:12:54.437588 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:12:54.437593 | orchestrator | 2026-04-16 10:12:54.437600 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-04-16 10:12:54.437605 | orchestrator | Thursday 16 April 2026 10:12:50 +0000 (0:00:01.264) 0:00:13.124 ******** 2026-04-16 10:12:54.437610 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-04-16 10:12:54.437614 | orchestrator | 2026-04-16 10:12:54.437619 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-04-16 10:12:54.437624 | orchestrator | Thursday 16 April 2026 10:12:53 +0000 (0:00:02.065) 0:00:15.189 ******** 2026-04-16 10:12:54.437632 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785262 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785382 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785410 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:01.785419 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:01.785444 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:01.785452 | orchestrator | 2026-04-16 10:13:01.785460 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-04-16 10:13:01.785468 | orchestrator | Thursday 16 April 2026 10:12:56 +0000 (0:00:03.588) 0:00:18.778 ******** 2026-04-16 10:13:01.785475 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:13:01.785488 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:13:01.785493 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:13:01.785499 | orchestrator | 2026-04-16 10:13:01.785506 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-04-16 10:13:01.785512 | orchestrator | Thursday 16 April 2026 10:12:57 +0000 (0:00:01.309) 0:00:20.087 ******** 2026-04-16 10:13:01.785519 | 
orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-04-16 10:13:01.785525 | orchestrator | 2026-04-16 10:13:01.785531 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-04-16 10:13:01.785537 | orchestrator | Thursday 16 April 2026 10:12:59 +0000 (0:00:01.792) 0:00:21.880 ******** 2026-04-16 10:13:01.785543 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785554 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785561 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:01.785574 | orchestrator | ok: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:05.315601 | orchestrator | ok: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:05.315713 | orchestrator | ok: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2026-04-16 10:13:05.315730 | orchestrator | 2026-04-16 10:13:05.315744 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-04-16 10:13:05.315761 | orchestrator | Thursday 16 April 2026 10:13:03 +0000 (0:00:03.396) 0:00:25.277 ******** 2026-04-16 10:13:05.315807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:05.315831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:05.315882 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:13:05.315932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:05.315955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:05.315974 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:13:05.316002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:05.316022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:05.316054 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:13:05.316076 | orchestrator | 2026-04-16 10:13:05.316097 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-04-16 10:13:05.316117 | orchestrator | Thursday 16 April 2026 10:13:04 +0000 (0:00:01.759) 0:00:27.037 ******** 2026-04-16 10:13:05.316154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:09.291773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:09.291895 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:13:09.291913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:09.291942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:09.291980 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:13:09.291995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:09.292030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:09.292046 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:13:09.292060 | orchestrator | 2026-04-16 10:13:09.292075 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-04-16 10:13:09.292100 | orchestrator | Thursday 16 April 2026 10:13:06 +0000 (0:00:02.113) 0:00:29.150 ******** 2026-04-16 10:13:09.292110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:09.292125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:09.292150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:09.292177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.059805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.059924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.059940 | orchestrator | 2026-04-16 10:13:17.059951 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-04-16 10:13:17.059961 | orchestrator | Thursday 16 April 2026 10:13:10 +0000 (0:00:03.393) 0:00:32.543 ******** 2026-04-16 10:13:17.059973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:17.060007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:17.060037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:17.060047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.060060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.060075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:17.060083 | orchestrator | 2026-04-16 10:13:17.060091 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-04-16 10:13:17.060099 | orchestrator | Thursday 16 April 2026 10:13:16 +0000 (0:00:06.287) 0:00:38.831 ******** 2026-04-16 10:13:17.060113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:21.267484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:21.267593 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:13:21.267628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:21.267683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:21.267708 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:13:21.267721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:21.267753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:21.267766 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:13:21.267777 | orchestrator | 2026-04-16 10:13:21.267790 | orchestrator | TASK [service-check-containers : magnum | Check containers] ******************** 2026-04-16 10:13:21.267802 | orchestrator | Thursday 16 April 2026 10:13:18 +0000 (0:00:02.226) 0:00:41.057 ******** 2026-04-16 10:13:21.267820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:21.267841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:21.267854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-04-16 10:13:21.267875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:49.348812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:49.348946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-04-16 10:13:49.348960 | orchestrator | 2026-04-16 10:13:49.348970 | orchestrator | TASK [service-check-containers : magnum | Notify handlers to restart containers] *** 2026-04-16 10:13:49.348979 | orchestrator | Thursday 16 April 2026 10:13:22 +0000 (0:00:03.706) 0:00:44.764 ******** 2026-04-16 10:13:49.348987 | orchestrator | changed: [testbed-node-0] => { 2026-04-16 10:13:49.348996 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 10:13:49.349003 | orchestrator | } 2026-04-16 10:13:49.349011 | orchestrator | changed: [testbed-node-1] => { 2026-04-16 10:13:49.349018 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 10:13:49.349025 | orchestrator | } 2026-04-16 10:13:49.349032 | orchestrator | changed: [testbed-node-2] => { 2026-04-16 10:13:49.349040 | orchestrator |  "msg": "Notifying handlers" 2026-04-16 10:13:49.349047 | orchestrator | } 2026-04-16 10:13:49.349054 | orchestrator | 2026-04-16 10:13:49.349062 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-04-16 10:13:49.349070 | orchestrator | Thursday 16 April 2026 10:13:23 +0000 (0:00:01.324) 0:00:46.089 ******** 2026-04-16 10:13:49.349086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:49.349102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:49.349115 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:13:49.349149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:49.349182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:49.349194 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:13:49.349202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-04-16 10:13:49.349210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-04-16 10:13:49.349218 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:13:49.349226 | orchestrator | 2026-04-16 10:13:49.349233 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-04-16 10:13:49.349241 | orchestrator | Thursday 16 April 2026 10:13:26 +0000 (0:00:02.074) 0:00:48.164 ******** 2026-04-16 10:13:49.349248 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:13:49.349255 | orchestrator | 2026-04-16 10:13:49.349262 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-16 10:13:49.349275 | orchestrator | Thursday 16 April 2026 10:13:48 +0000 (0:00:22.890) 0:01:11.054 ******** 2026-04-16 10:13:49.349282 | orchestrator | 2026-04-16 10:13:49.349290 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-04-16 10:13:49.349303 | orchestrator | Thursday 16 April 2026 10:13:49 +0000 (0:00:00.435) 0:01:11.490 ******** 2026-04-16 10:14:35.795746 | orchestrator | 2026-04-16 10:14:35.795894 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-04-16 10:14:35.795915 | orchestrator | Thursday 16 April 2026 10:13:49 +0000 (0:00:00.428) 0:01:11.919 ******** 2026-04-16 10:14:35.795927 | orchestrator | 2026-04-16 10:14:35.795939 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-04-16 10:14:35.795950 | orchestrator | Thursday 16 April 2026 10:13:50 +0000 (0:00:00.797) 0:01:12.716 ******** 2026-04-16 10:14:35.795962 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:14:35.795974 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:14:35.795985 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:14:35.795996 | orchestrator | 2026-04-16 10:14:35.796007 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-04-16 10:14:35.796019 | orchestrator | Thursday 16 April 2026 10:14:12 +0000 (0:00:21.664) 0:01:34.381 ******** 2026-04-16 10:14:35.796030 | orchestrator | changed: [testbed-node-0] 2026-04-16 10:14:35.796041 | orchestrator | changed: [testbed-node-2] 2026-04-16 10:14:35.796052 | orchestrator | changed: [testbed-node-1] 2026-04-16 10:14:35.796062 | orchestrator | 2026-04-16 10:14:35.796073 | orchestrator | PLAY RECAP ********************************************************************* 2026-04-16 10:14:35.796085 | orchestrator | testbed-node-0 : ok=16  changed=7  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-04-16 10:14:35.796115 | orchestrator | testbed-node-1 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-04-16 10:14:35.796127 | orchestrator | testbed-node-2 : ok=14  changed=6  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-04-16 10:14:35.796138 | orchestrator | 2026-04-16 10:14:35.796149 | orchestrator | 2026-04-16 10:14:35.796160 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 10:14:35.796171 | orchestrator | Thursday 16 April 2026 10:14:35 +0000 (0:00:23.302) 0:01:57.683 ******** 2026-04-16 10:14:35.796182 | orchestrator | =============================================================================== 2026-04-16 10:14:35.796192 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 23.30s 2026-04-16 10:14:35.796203 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 22.89s 2026-04-16 10:14:35.796214 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.66s 2026-04-16 10:14:35.796225 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.29s 2026-04-16 10:14:35.796236 | orchestrator | service-check-containers : magnum | Check containers -------------------- 3.71s 2026-04-16 10:14:35.796247 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.59s 2026-04-16 10:14:35.796259 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.40s 2026-04-16 10:14:35.796271 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.39s 2026-04-16 10:14:35.796284 | orchestrator | magnum : include_tasks -------------------------------------------------- 2.65s 2026-04-16 10:14:35.796296 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.57s 2026-04-16 10:14:35.796308 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.27s 2026-04-16 10:14:35.796320 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.23s 2026-04-16 10:14:35.796333 | 
orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.11s 2026-04-16 10:14:35.796373 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.07s 2026-04-16 10:14:35.796386 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.06s 2026-04-16 10:14:35.796398 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.94s 2026-04-16 10:14:35.796410 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.79s 2026-04-16 10:14:35.796423 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 1.76s 2026-04-16 10:14:35.796436 | orchestrator | magnum : Flush handlers ------------------------------------------------- 1.66s 2026-04-16 10:14:35.796449 | orchestrator | service-check-containers : magnum | Notify handlers to restart containers --- 1.33s 2026-04-16 10:14:36.779714 | orchestrator | ok: Runtime: 3:13:36.816712 2026-04-16 10:14:37.347370 | 2026-04-16 10:14:37.347477 | TASK [Bootstrap services] 2026-04-16 10:14:37.890016 | orchestrator | skipping: Conditional result was False 2026-04-16 10:14:37.905251 | 2026-04-16 10:14:37.905355 | TASK [Run checks after the upgrade] 2026-04-16 10:14:38.595364 | orchestrator | + set -e 2026-04-16 10:14:38.595523 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 10:14:38.595536 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 10:14:38.595546 | orchestrator | ++ INTERACTIVE=false 2026-04-16 10:14:38.595552 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 10:14:38.595557 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 10:14:38.595564 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-16 10:14:38.596824 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-16 
10:14:38.602830 | orchestrator | 2026-04-16 10:14:38.602896 | orchestrator | # CHECK 2026-04-16 10:14:38.602906 | orchestrator | 2026-04-16 10:14:38.602914 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-16 10:14:38.602925 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-16 10:14:38.602933 | orchestrator | + echo 2026-04-16 10:14:38.602940 | orchestrator | + echo '# CHECK' 2026-04-16 10:14:38.602947 | orchestrator | + echo 2026-04-16 10:14:38.602959 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-16 10:14:38.603871 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-16 10:14:38.663539 | orchestrator | 2026-04-16 10:14:38.663645 | orchestrator | ## Containers @ testbed-manager 2026-04-16 10:14:38.663663 | orchestrator | 2026-04-16 10:14:38.663677 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-16 10:14:38.663689 | orchestrator | + echo 2026-04-16 10:14:38.663701 | orchestrator | + echo '## Containers @ testbed-manager' 2026-04-16 10:14:38.663712 | orchestrator | + echo 2026-04-16 10:14:38.663724 | orchestrator | + osism container testbed-manager ps 2026-04-16 10:14:40.059942 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-16 10:14:40.060065 | orchestrator | ae9452817018 registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20260328 "dumb-init --single-…" 5 minutes ago Up 5 minutes prometheus_blackbox_exporter 2026-04-16 10:14:40.060088 | orchestrator | f630269409b3 registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_alertmanager 2026-04-16 10:14:40.060099 | orchestrator | 40ece9869419 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor 2026-04-16 10:14:40.060109 | orchestrator | 749f0ebaf08b registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 
"dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter 2026-04-16 10:14:40.060120 | orchestrator | 90822cde4ff4 registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_server 2026-04-16 10:14:40.060131 | orchestrator | 55949ed41047 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-16 10:14:40.060146 | orchestrator | c12b614ed346 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-16 10:14:40.060157 | orchestrator | 331e85273e5f registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-16 10:14:40.060190 | orchestrator | d6195470620d registry.osism.tech/osism/openstackclient:2025.1 "/usr/bin/dumb-init …" 3 hours ago Up 3 hours openstackclient 2026-04-16 10:14:40.060201 | orchestrator | a39cff881ea9 registry.osism.tech/osism/inventory-reconciler:0.20260322.0 "/sbin/tini -- /entr…" 3 hours ago Up 3 hours (healthy) manager-inventory_reconciler-1 2026-04-16 10:14:40.060211 | orchestrator | 3ca1de3e4e6e registry.osism.tech/osism/ceph-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) ceph-ansible 2026-04-16 10:14:40.060221 | orchestrator | 9c992040c127 registry.osism.tech/osism/osism-ansible:0.20260322.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-ansible 2026-04-16 10:14:40.060231 | orchestrator | 0efb5d9668e6 registry.osism.tech/osism/kolla-ansible:0.20260328.0 "/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) kolla-ansible 2026-04-16 10:14:40.060265 | orchestrator | e6aa14202cb0 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- sleep…" 3 hours ago Up 3 hours (healthy) osismclient 2026-04-16 10:14:40.060276 | orchestrator | 9083618872e9 registry.osism.tech/osism/osism-kubernetes:0.20260322.0 
"/entrypoint.sh osis…" 3 hours ago Up 3 hours (healthy) osism-kubernetes 2026-04-16 10:14:40.060286 | orchestrator | 9f6b7bb11614 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-openstack-1 2026-04-16 10:14:40.060297 | orchestrator | c40e5133fafc registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up About an hour (healthy) manager-listener-1 2026-04-16 10:14:40.060307 | orchestrator | feb8cf13fb23 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-04-16 10:14:40.060317 | orchestrator | 7f248a3f2d27 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-beat-1 2026-04-16 10:14:40.060327 | orchestrator | b7d20b616b91 registry.osism.tech/osism/osism-frontend:0.20260320.0 "docker-entrypoint.s…" 3 hours ago Up 3 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-04-16 10:14:40.060337 | orchestrator | aafadc07ffd2 registry.osism.tech/osism/osism:0.20260320.0 "/sbin/tini -- osism…" 3 hours ago Up 3 hours (healthy) manager-flower-1 2026-04-16 10:14:40.060347 | orchestrator | b94deb5e8679 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 4 hours ago Up 4 hours cephclient 2026-04-16 10:14:40.060364 | orchestrator | 8c226b4691a8 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 5 hours ago Up 5 hours (healthy) 80/tcp phpmyadmin 2026-04-16 10:14:40.060374 | orchestrator | 6b19df446ac2 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 5 hours ago Up 5 hours (healthy) 8080/tcp homer 2026-04-16 10:14:40.060384 | orchestrator | dab2e9c0b477 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 5 hours ago Up 5 hours 80/tcp cgit 2026-04-16 10:14:40.060394 | orchestrator | d9b6a90be2b5 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 5 hours ago Up 5 hours (healthy) 
192.168.16.5:3128->3128/tcp squid 2026-04-16 10:14:40.060409 | orchestrator | 3cdfa73f33b7 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 5 hours ago Up 3 hours (healthy) 8000/tcp manager-ara-server-1 2026-04-16 10:14:40.060419 | orchestrator | 16fed27a60ab registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 3306/tcp manager-mariadb-1 2026-04-16 10:14:40.060429 | orchestrator | d240ff99de54 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 5 hours ago Up 3 hours (healthy) 6379/tcp manager-redis-1 2026-04-16 10:14:40.060446 | orchestrator | 76ebafedf5b8 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 5 hours ago Up 5 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-04-16 10:14:40.189596 | orchestrator | 2026-04-16 10:14:40.189704 | orchestrator | ## Images @ testbed-manager 2026-04-16 10:14:40.189722 | orchestrator | 2026-04-16 10:14:40.189735 | orchestrator | + echo 2026-04-16 10:14:40.189747 | orchestrator | + echo '## Images @ testbed-manager' 2026-04-16 10:14:40.189758 | orchestrator | + echo 2026-04-16 10:14:40.189769 | orchestrator | + osism container testbed-manager images 2026-04-16 10:14:41.596274 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-16 10:14:41.596355 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 9e238fdcbaa6 6 hours ago 238MB 2026-04-16 10:14:41.596366 | orchestrator | registry.osism.tech/osism/openstackclient 2025.1 b8c485a7bc26 6 hours ago 212MB 2026-04-16 10:14:41.596374 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20260328.0 38f6ca42e9a0 2 weeks ago 635MB 2026-04-16 10:14:41.596382 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB 2026-04-16 10:14:41.596389 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 
20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB 2026-04-16 10:14:41.596398 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB 2026-04-16 10:14:41.596406 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter 0.25.0.20260328 1bf017fd7bf3 2 weeks ago 319MB 2026-04-16 10:14:41.596436 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager 0.28.1.20260328 d1986023a383 2 weeks ago 415MB 2026-04-16 10:14:41.596444 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB 2026-04-16 10:14:41.596452 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-server 3.2.1.20260328 4f5732d5eb69 2 weeks ago 860MB 2026-04-16 10:14:41.596459 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB 2026-04-16 10:14:41.596467 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20260322.0 3e18c5de9bc5 3 weeks ago 634MB 2026-04-16 10:14:41.596475 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20260322.0 c68c1f5728ae 3 weeks ago 1.24GB 2026-04-16 10:14:41.596483 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20260322.0 f6e7e0d58bb1 3 weeks ago 585MB 2026-04-16 10:14:41.596490 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20260322.0 9806642932fd 3 weeks ago 357MB 2026-04-16 10:14:41.596531 | orchestrator | registry.osism.tech/osism/osism 0.20260320.0 5d0420989a40 3 weeks ago 408MB 2026-04-16 10:14:41.596536 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20260320.0 80b833af5991 3 weeks ago 232MB 2026-04-16 10:14:41.596541 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 months ago 41.4MB 2026-04-16 10:14:41.596555 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 4 months ago 11.5MB 2026-04-16 
10:14:41.596560 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 4 months ago 608MB 2026-04-16 10:14:41.596571 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-16 10:14:41.596576 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-16 10:14:41.596581 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-16 10:14:41.596586 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 4 months ago 308MB 2026-04-16 10:14:41.596590 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-16 10:14:41.596595 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 4 months ago 404MB 2026-04-16 10:14:41.596600 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 4 months ago 839MB 2026-04-16 10:14:41.596604 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-16 10:14:41.596609 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 4 months ago 330MB 2026-04-16 10:14:41.596614 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 4 months ago 613MB 2026-04-16 10:14:41.596618 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 4 months ago 560MB 2026-04-16 10:14:41.596639 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 4 months ago 1.23GB 2026-04-16 10:14:41.596644 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 4 months ago 383MB 2026-04-16 10:14:41.596649 | orchestrator | registry.osism.tech/osism/osism-frontend 
0.20251130.1 2cb6e7609620 4 months ago 238MB 2026-04-16 10:14:41.596659 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 5 months ago 334MB 2026-04-16 10:14:41.596664 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 6 months ago 742MB 2026-04-16 10:14:41.596668 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 7 months ago 275MB 2026-04-16 10:14:41.596673 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 months ago 226MB 2026-04-16 10:14:41.596678 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 11 months ago 453MB 2026-04-16 10:14:41.596682 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 22 months ago 146MB 2026-04-16 10:14:41.596690 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-04-16 10:14:41.724869 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-04-16 10:14:41.725424 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-16 10:14:41.782840 | orchestrator | 2026-04-16 10:14:41.782944 | orchestrator | ## Containers @ testbed-node-0 2026-04-16 10:14:41.782960 | orchestrator | 2026-04-16 10:14:41.782972 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-16 10:14:41.782984 | orchestrator | + echo 2026-04-16 10:14:41.782996 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-04-16 10:14:41.783008 | orchestrator | + echo 2026-04-16 10:14:41.783020 | orchestrator | + osism container testbed-node-0 ps 2026-04-16 10:14:43.223255 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-04-16 10:14:43.223363 | orchestrator | f664a4289660 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 11 seconds ago Up 9 seconds (health: starting) magnum_conductor 2026-04-16 10:14:43.223411 | orchestrator | 5968e672665d 
registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 43 seconds ago Up 41 seconds (healthy) magnum_api
2026-04-16 10:14:43.223424 | orchestrator | b75877d5767b registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 4 minutes ago Up 4 minutes grafana
2026-04-16 10:14:43.223436 | orchestrator | cf14c73ca476 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-16 10:14:43.223449 | orchestrator | d4ec4822489b registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-16 10:14:43.223460 | orchestrator | 999cf458611d registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter
2026-04-16 10:14:43.223471 | orchestrator | eac224ac36c7 registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-16 10:14:43.223483 | orchestrator | 62dce9d0dbd0 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-16 10:14:43.223494 | orchestrator | a327e98e8738 registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) manila_share
2026-04-16 10:14:43.223567 | orchestrator | e87addd03ec7 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) manila_scheduler
2026-04-16 10:14:43.223592 | orchestrator | 91627b975ed4 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data
2026-04-16 10:14:43.223605 | orchestrator | 65e4ecca1839 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_api
2026-04-16 10:14:43.223616 | orchestrator | fe7e75caa83b registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) octavia_worker
2026-04-16 10:14:43.223627 | orchestrator | 6bd293573a08 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_housekeeping
2026-04-16 10:14:43.223638 | orchestrator | 5077c2240b55 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_health_manager
2026-04-16 10:14:43.223649 | orchestrator | 29cf99429d56 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes octavia_driver_agent
2026-04-16 10:14:43.223660 | orchestrator | 48a8b63b5344 registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) octavia_api
2026-04-16 10:14:43.223688 | orchestrator | a13a2306ed8b registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_notifier
2026-04-16 10:14:43.223700 | orchestrator | 506d8087a731 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_listener
2026-04-16 10:14:43.223711 | orchestrator | f1fb81abd3b9 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator
2026-04-16 10:14:43.223722 | orchestrator | e3768276b034 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_api
2026-04-16 10:14:43.223745 | orchestrator | ad2c60f9c464 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central
2026-04-16 10:14:43.223757 | orchestrator | 6cf9d6fed9d9 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) ceilometer_notification
2026-04-16 10:14:43.223768 | orchestrator | a2aca80411a2 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_worker
2026-04-16 10:14:43.223779 | orchestrator | 879511ea7b56 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_mdns
2026-04-16 10:14:43.223789 | orchestrator | 2820feff5941 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 23 minutes (healthy) designate_producer
2026-04-16 10:14:43.223818 | orchestrator | faf7fcba2436 registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 24 minutes (healthy) designate_central
2026-04-16 10:14:43.223838 | orchestrator | 9d82fc71ebcf registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api
2026-04-16 10:14:43.223857 | orchestrator | e1fc7f4b6807 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9
2026-04-16 10:14:43.223877 | orchestrator | cb237cae74ad registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-04-16 10:14:43.223897 | orchestrator | 472cbbb18d57 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener
2026-04-16 10:14:43.223917 | orchestrator | 563e7aa2b356 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api
2026-04-16 10:14:43.223936 | orchestrator | 607da76991fe registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_backup
2026-04-16 10:14:43.223949 | orchestrator | d6dd268a06c1 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume
2026-04-16 10:14:43.223961 | orchestrator | 0c3850bf4c5d registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-04-16 10:14:43.223971 | orchestrator | 9181b5dda9f9 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_api
2026-04-16 10:14:43.223983 | orchestrator | 5ab97af7ab6b registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) glance_api
2026-04-16 10:14:43.224002 | orchestrator | 7f6f33141525 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) skyline_console
2026-04-16 10:14:43.224014 | orchestrator | 3f3fc581f0cf registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) skyline_apiserver
2026-04-16 10:14:43.224025 | orchestrator | ba2d0bf98769 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) horizon
2026-04-16 10:14:43.224036 | orchestrator | be0df0ade77b registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 55 minutes ago Up 46 minutes (healthy) nova_novncproxy
2026-04-16 10:14:43.224047 | orchestrator | e23c5953ff7e registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 56 minutes ago Up 46 minutes (healthy) nova_conductor
2026-04-16 10:14:43.224058 | orchestrator | 79a4a1a26dac registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) nova_metadata
2026-04-16 10:14:43.224077 | orchestrator | b4d2887f6d8c registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 45 minutes (healthy) nova_api
2026-04-16 10:14:43.224088 | orchestrator | 4bb3d6d8157e registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 45 minutes (healthy) nova_scheduler
2026-04-16 10:14:43.224099 | orchestrator | 290cd82ec39b registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-16 10:14:43.224110 | orchestrator | b5ccd13a55e1 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-16 10:14:43.224127 | orchestrator | a75b7263b9d8 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-16 10:14:43.224138 | orchestrator | e1bf1ddfc761 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-16 10:14:43.224149 | orchestrator | c1cfed7ef6a6 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-16 10:14:43.224160 | orchestrator | b8d6a82fd235 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-04-16 10:14:43.224172 | orchestrator | 8779b06cda6d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-0
2026-04-16 10:14:43.224182 | orchestrator | 73554beccbed registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-0
2026-04-16 10:14:43.224193 | orchestrator | 544a544b5f8c registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd
2026-04-16 10:14:43.224204 | orchestrator | 649568545de2 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1
2026-04-16 10:14:43.224215 | orchestrator | 24f2984b1927 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db
2026-04-16 10:14:43.224233 | orchestrator | 6136cd232f1a registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db
2026-04-16 10:14:43.224244 | orchestrator | 102125300f96 registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller
2026-04-16 10:14:43.224256 | orchestrator | 240ec16d3689 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd
2026-04-16 10:14:43.224267 | orchestrator | 97ba9e286533 registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db
2026-04-16 10:14:43.224284 | orchestrator | 2313c90ae69f registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq
2026-04-16 10:14:43.224295 | orchestrator | 33c5762d37fb registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb
2026-04-16 10:14:43.224311 | orchestrator | d54b1e50cdc0 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel
2026-04-16 10:14:43.224331 | orchestrator | e7ab9c516d9f registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis
2026-04-16 10:14:43.224349 | orchestrator | 3b6aca84e2e1 registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached
2026-04-16 10:14:43.224384 | orchestrator | 950bb743732d registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards
2026-04-16 10:14:43.224404 | orchestrator | 362a448d33be registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch
2026-04-16 10:14:43.224418 | orchestrator | f9807a917bbe registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived
2026-04-16 10:14:43.224429 | orchestrator | 1d42713fc865 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql
2026-04-16 10:14:43.224445 | orchestrator | d31589ecebad registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy
2026-04-16 10:14:43.224464 | orchestrator | cf3212ac2ee3 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-16 10:14:43.224483 | orchestrator | a8743461cc4c registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox
2026-04-16 10:14:43.224526 | orchestrator | 853a94ff39ee registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd
2026-04-16 10:14:43.356036 | orchestrator |
2026-04-16 10:14:43.356131 | orchestrator | ## Images @ testbed-node-0
2026-04-16 10:14:43.356147 | orchestrator |
2026-04-16 10:14:43.356158 | orchestrator | + echo
2026-04-16 10:14:43.356170 | orchestrator | + echo '## Images @ testbed-node-0'
2026-04-16 10:14:43.356182 | orchestrator | + echo
2026-04-16 10:14:43.356193 | orchestrator | + osism container testbed-node-0 images
2026-04-16 10:14:44.926571 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-16 10:14:44.926710 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB
2026-04-16 10:14:44.926734 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB
2026-04-16 10:14:44.926783 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB
2026-04-16 10:14:44.926801 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB
2026-04-16 10:14:44.926818 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB
2026-04-16 10:14:44.926835 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB
2026-04-16 10:14:44.926852 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB
2026-04-16 10:14:44.926974 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB
2026-04-16 10:14:44.926995 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB
2026-04-16 10:14:44.927034 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB
2026-04-16 10:14:44.927053 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 weeks ago 285MB
2026-04-16 10:14:44.927072 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB
2026-04-16 10:14:44.927090 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB
2026-04-16 10:14:44.927108 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB
2026-04-16 10:14:44.927127 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB
2026-04-16 10:14:44.927145 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB
2026-04-16 10:14:44.927165 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB
2026-04-16 10:14:44.927183 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB
2026-04-16 10:14:44.927202 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB
2026-04-16 10:14:44.927219 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB
2026-04-16 10:14:44.927236 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB
2026-04-16 10:14:44.927252 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB
2026-04-16 10:14:44.927269 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB
2026-04-16 10:14:44.927286 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB
2026-04-16 10:14:44.927336 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB
2026-04-16 10:14:44.927357 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB
2026-04-16 10:14:44.927376 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB
2026-04-16 10:14:44.927412 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB
2026-04-16 10:14:44.927431 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB
2026-04-16 10:14:44.927449 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB
2026-04-16 10:14:44.927468 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB
2026-04-16 10:14:44.927490 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB
2026-04-16 10:14:44.927554 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB
2026-04-16 10:14:44.927576 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB
2026-04-16 10:14:44.927596 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB
2026-04-16 10:14:44.927616 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB
2026-04-16 10:14:44.927636 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB
2026-04-16 10:14:44.927654 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB
2026-04-16 10:14:44.927672 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB
2026-04-16 10:14:44.927700 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB
2026-04-16 10:14:44.927719 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB
2026-04-16 10:14:44.927738 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB
2026-04-16 10:14:44.927758 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB
2026-04-16 10:14:44.927778 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB
2026-04-16 10:14:44.927796 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 2 weeks ago 1.43GB
2026-04-16 10:14:44.927814 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB
2026-04-16 10:14:44.927833 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB
2026-04-16 10:14:44.927853 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 2 weeks ago 1.07GB
2026-04-16 10:14:44.927873 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB
2026-04-16 10:14:44.927893 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB
2026-04-16 10:14:44.927912 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 2 weeks ago 1GB
2026-04-16 10:14:44.927931 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB
2026-04-16 10:14:44.927963 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB
2026-04-16 10:14:44.927981 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB
2026-04-16 10:14:44.928000 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB
2026-04-16 10:14:44.928039 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB
2026-04-16 10:14:44.928064 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB
2026-04-16 10:14:44.928084 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB
2026-04-16 10:14:44.928102 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB
2026-04-16 10:14:44.928119 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB
2026-04-16 10:14:44.928136 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB
2026-04-16 10:14:44.928156 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB
2026-04-16 10:14:44.928173 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB
2026-04-16 10:14:44.928189 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB
2026-04-16 10:14:44.928206 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB
2026-04-16 10:14:44.928224 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB
2026-04-16 10:14:44.928242 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB
2026-04-16 10:14:44.928261 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB
2026-04-16 10:14:44.928279 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB
2026-04-16 10:14:44.928296 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-16 10:14:44.928316 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-16 10:14:44.928333 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-16 10:14:44.928352 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-16 10:14:44.928383 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-16 10:14:44.928401 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-16 10:14:44.928418 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-16 10:14:44.928429 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-16 10:14:44.928440 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-16 10:14:44.928462 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-16 10:14:44.928473 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-16 10:14:44.928484 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-16 10:14:44.928495 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-16 10:14:44.928539 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-16 10:14:44.928551 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-16 10:14:44.928562 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-16 10:14:44.928586 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-16 10:14:44.928597 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-16 10:14:44.928608 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-16 10:14:44.928619 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-16 10:14:44.928630 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-16 10:14:44.928641 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-16 10:14:44.928652 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-16 10:14:44.928663 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-16 10:14:44.928674 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-16 10:14:44.928684 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-16 10:14:44.928695 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-16 10:14:44.928707 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-16 10:14:44.928725 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-16 10:14:44.928742 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-16 10:14:44.928760 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-16 10:14:44.928777 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-16 10:14:44.928795 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-16 10:14:44.928812 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-16 10:14:44.928842 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-16 10:14:44.928863 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-16 10:14:44.928881 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-16 10:14:44.928900 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-16 10:14:44.928912 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-16 10:14:44.928922 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-16 10:14:44.928933 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-16 10:14:44.928944 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-16 10:14:44.928963 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-16 10:14:44.928974 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-16 10:14:44.928985 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-16 10:14:44.929001 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-16 10:14:44.929012 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-16 10:14:44.929032 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-16 10:14:44.929043 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-16 10:14:44.929054 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-16 10:14:44.929065 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-16 10:14:44.929076 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-16 10:14:44.929087 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-16 10:14:44.929098 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-16 10:14:44.929111 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-16 10:14:44.929129 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-16 10:14:44.929157 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-16 10:14:44.929176 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-16 10:14:44.929194 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-16 10:14:44.929227 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-16 10:14:44.929245 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-16 10:14:44.929261 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-16 10:14:44.929278 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-16 10:14:44.929294 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-16 10:14:44.929310 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-16 10:14:44.929328 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-16 10:14:44.929345 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-16 10:14:44.929363 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-16 10:14:44.929382 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-16 10:14:45.067782 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-16 10:14:45.068000 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-16 10:14:45.114112 | orchestrator |
2026-04-16 10:14:45.114204 | orchestrator | ## Containers @ testbed-node-1
2026-04-16 10:14:45.114215 | orchestrator |
2026-04-16 10:14:45.114222 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-16 10:14:45.114228 | orchestrator | + echo
2026-04-16 10:14:45.114236 | orchestrator | + echo '## Containers @ testbed-node-1'
2026-04-16 10:14:45.114244 | orchestrator | + echo
2026-04-16 10:14:45.114250 | orchestrator | + osism container testbed-node-1 ps
2026-04-16 10:14:46.626862 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-16 10:14:46.626993 | orchestrator | 2983b7f21b92 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 13 seconds ago Up 12 seconds (health: starting) magnum_conductor
2026-04-16 10:14:46.627023 | orchestrator | 68198a64efcd registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 36 seconds ago Up 35 seconds (healthy) magnum_api
2026-04-16 10:14:46.627042 | orchestrator | 04c86cc60480 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-16 10:14:46.627062 | orchestrator | 905efdabf7c1 registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-16 10:14:46.627084 | orchestrator | c2e70d9c50a1 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-16 10:14:46.627104 | orchestrator | f7128c50c27b registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter
2026-04-16 10:14:46.627116 | orchestrator | 1fdd457cd10a registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-16 10:14:46.627153 | orchestrator | d42e62e5bd16 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-16 10:14:46.627182 | orchestrator | eb377e63fb8e registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) manila_share
2026-04-16 10:14:46.627194 | orchestrator | e5de431988f4 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-16 10:14:46.627205 | orchestrator | 6f9eb1263d77 registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data
2026-04-16 10:14:46.627216 | orchestrator | 02665e689858 registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_api
2026-04-16 10:14:46.627227 | orchestrator | 8e74b0137964 registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_worker
2026-04-16 10:14:46.627238 | orchestrator | c32bdba5f4c2 registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_housekeeping
2026-04-16 10:14:46.627249 | orchestrator | a5015e025422 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_health_manager
2026-04-16 10:14:46.627259 | orchestrator | 10e50fd70b9f registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes octavia_driver_agent
2026-04-16 10:14:46.627270 | orchestrator | ee1cf676bbfa registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_api
2026-04-16 10:14:46.627296 | orchestrator | 7073cd757f2c registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_notifier
2026-04-16 10:14:46.627308 | orchestrator | f3952c86d4c2 registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy)
aodh_listener 2026-04-16 10:14:46.627319 | orchestrator | 5ea5e2b1429a registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator 2026-04-16 10:14:46.627330 | orchestrator | 5c72485fcb74 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_api 2026-04-16 10:14:46.627341 | orchestrator | 68d5c216b79f registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central 2026-04-16 10:14:46.627352 | orchestrator | 016c6e944a62 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) ceilometer_notification 2026-04-16 10:14:46.627362 | orchestrator | 318c4cc3f527 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_worker 2026-04-16 10:14:46.627386 | orchestrator | 0d6374aea486 registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_mdns 2026-04-16 10:14:46.627404 | orchestrator | d38f5ac3a5d3 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_producer 2026-04-16 10:14:46.627417 | orchestrator | e43edc50d04a registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central 2026-04-16 10:14:46.627430 | orchestrator | 19b8f4d21e69 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api 2026-04-16 10:14:46.627458 | orchestrator | 442bf63b9db5 
registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9 2026-04-16 10:14:46.627470 | orchestrator | 015cc033fe17 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker 2026-04-16 10:14:46.627482 | orchestrator | 4ea2b94defb3 registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener 2026-04-16 10:14:46.627495 | orchestrator | bbd117648646 registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api 2026-04-16 10:14:46.627536 | orchestrator | 99f91ffe5458 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_backup 2026-04-16 10:14:46.627556 | orchestrator | 6e53d2311c9b registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume 2026-04-16 10:14:46.627567 | orchestrator | 4a2e5a1b33b8 registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-04-16 10:14:46.627578 | orchestrator | 18325c9f894b registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_api 2026-04-16 10:14:46.627599 | orchestrator | a7454e6f0329 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) glance_api 2026-04-16 10:14:46.627611 | orchestrator | a86286ed2705 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 
minutes (healthy) skyline_console 2026-04-16 10:14:46.627622 | orchestrator | d658415f6a39 registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) skyline_apiserver 2026-04-16 10:14:46.627646 | orchestrator | f10814b484f1 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) horizon 2026-04-16 10:14:46.627666 | orchestrator | c1448b44fe05 registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 55 minutes ago Up 46 minutes (healthy) nova_novncproxy 2026-04-16 10:14:46.627677 | orchestrator | 13f1a90259a9 registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 56 minutes ago Up 46 minutes (healthy) nova_conductor 2026-04-16 10:14:46.627688 | orchestrator | 8e7632390de0 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) nova_metadata 2026-04-16 10:14:46.627699 | orchestrator | ca8b18450fb6 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 45 minutes (healthy) nova_api 2026-04-16 10:14:46.627710 | orchestrator | 7f1832fa6b7b registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 46 minutes (healthy) nova_scheduler 2026-04-16 10:14:46.627721 | orchestrator | 4bdb09f3b0d9 registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server 2026-04-16 10:14:46.627732 | orchestrator | bb7dfb7e15ef registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api 2026-04-16 10:14:46.627743 | orchestrator | f9ecc28c7e2d registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone 2026-04-16 10:14:46.627754 | orchestrator | 5e4fc2be5a18 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet 2026-04-16 10:14:46.627765 | orchestrator | 92521304537b registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh 2026-04-16 10:14:46.627776 | orchestrator | 640bd0e187d1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-04-16 10:14:46.627787 | orchestrator | 71b85a1523a8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-1 2026-04-16 10:14:46.627798 | orchestrator | 2ad110912802 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-1 2026-04-16 10:14:46.627809 | orchestrator | a5915ca5cb90 registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd 2026-04-16 10:14:46.627820 | orchestrator | c73a6eeeb22d registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1 2026-04-16 10:14:46.627831 | orchestrator | 618b6dcdcb51 registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db 2026-04-16 10:14:46.627849 | orchestrator | 29dc6a18e214 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db 2026-04-16 10:14:46.627868 | orchestrator | 1c6f01758d5f registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller 2026-04-16 10:14:46.627879 | 
orchestrator | cde02e1f115d registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd 2026-04-16 10:14:46.627890 | orchestrator | 471c5c374fab registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db 2026-04-16 10:14:46.627900 | orchestrator | c8834d96b395 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq 2026-04-16 10:14:46.627911 | orchestrator | cfefc565e23a registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb 2026-04-16 10:14:46.627922 | orchestrator | e59f9c451680 registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel 2026-04-16 10:14:46.627933 | orchestrator | 8c7a201a4ff5 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis 2026-04-16 10:14:46.627944 | orchestrator | 749513d3d1de registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached 2026-04-16 10:14:46.627955 | orchestrator | f1fcde958a33 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards 2026-04-16 10:14:46.627966 | orchestrator | e9857b4e2338 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch 2026-04-16 10:14:46.627977 | orchestrator | 7975b29c0d2b registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived 2026-04-16 10:14:46.627988 | orchestrator | 9badba08435c 
registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql 2026-04-16 10:14:46.627998 | orchestrator | ff02471ffc4a registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy 2026-04-16 10:14:46.628009 | orchestrator | 538ba7628ab5 registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron 2026-04-16 10:14:46.628020 | orchestrator | 6c4bbdd78498 registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox 2026-04-16 10:14:46.628031 | orchestrator | 482edaa45e56 registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd 2026-04-16 10:14:46.756933 | orchestrator | 2026-04-16 10:14:46.757036 | orchestrator | ## Images @ testbed-node-1 2026-04-16 10:14:46.757053 | orchestrator | 2026-04-16 10:14:46.757065 | orchestrator | + echo 2026-04-16 10:14:46.757077 | orchestrator | + echo '## Images @ testbed-node-1' 2026-04-16 10:14:46.757127 | orchestrator | + echo 2026-04-16 10:14:46.757141 | orchestrator | + osism container testbed-node-1 images 2026-04-16 10:14:48.294391 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-04-16 10:14:48.295571 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB 2026-04-16 10:14:48.295629 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB 2026-04-16 10:14:48.295649 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB 2026-04-16 10:14:48.295666 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB 2026-04-16 10:14:48.295685 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB 2026-04-16 10:14:48.295704 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB 2026-04-16 10:14:48.295731 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB 2026-04-16 10:14:48.295750 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB 2026-04-16 10:14:48.295770 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB 2026-04-16 10:14:48.295790 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB 2026-04-16 10:14:48.295810 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 weeks ago 285MB 2026-04-16 10:14:48.295829 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB 2026-04-16 10:14:48.295848 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB 2026-04-16 10:14:48.295868 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB 2026-04-16 10:14:48.295888 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB 2026-04-16 10:14:48.295908 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB 2026-04-16 10:14:48.295928 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB 2026-04-16 10:14:48.295947 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB 2026-04-16 
10:14:48.295965 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB 2026-04-16 10:14:48.295982 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB 2026-04-16 10:14:48.296002 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB 2026-04-16 10:14:48.296022 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB 2026-04-16 10:14:48.296042 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB 2026-04-16 10:14:48.296061 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB 2026-04-16 10:14:48.296109 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB 2026-04-16 10:14:48.296122 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB 2026-04-16 10:14:48.296133 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB 2026-04-16 10:14:48.296144 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB 2026-04-16 10:14:48.296155 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB 2026-04-16 10:14:48.296192 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB 2026-04-16 10:14:48.296204 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB 2026-04-16 10:14:48.296215 | orchestrator | 
registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB 2026-04-16 10:14:48.296226 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB 2026-04-16 10:14:48.296237 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB 2026-04-16 10:14:48.296248 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB 2026-04-16 10:14:48.296260 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB 2026-04-16 10:14:48.296279 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB 2026-04-16 10:14:48.296294 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB 2026-04-16 10:14:48.296315 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB 2026-04-16 10:14:48.296342 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB 2026-04-16 10:14:48.296359 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB 2026-04-16 10:14:48.296376 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB 2026-04-16 10:14:48.296394 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB 2026-04-16 10:14:48.296411 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB 2026-04-16 10:14:48.296429 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 
2583a0d99734 2 weeks ago 1.43GB 2026-04-16 10:14:48.296447 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB 2026-04-16 10:14:48.296465 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB 2026-04-16 10:14:48.296483 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 2 weeks ago 1.07GB 2026-04-16 10:14:48.296500 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB 2026-04-16 10:14:48.296565 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB 2026-04-16 10:14:48.296585 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 2 weeks ago 1GB 2026-04-16 10:14:48.296604 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB 2026-04-16 10:14:48.296622 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB 2026-04-16 10:14:48.296641 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB 2026-04-16 10:14:48.296659 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB 2026-04-16 10:14:48.296678 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB 2026-04-16 10:14:48.296696 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB 2026-04-16 10:14:48.296715 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB 2026-04-16 10:14:48.296734 | 
orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB 2026-04-16 10:14:48.296752 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB 2026-04-16 10:14:48.296788 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB 2026-04-16 10:14:48.296808 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB 2026-04-16 10:14:48.296838 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB 2026-04-16 10:14:48.296858 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB 2026-04-16 10:14:48.296878 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB 2026-04-16 10:14:48.296897 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB 2026-04-16 10:14:48.296915 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB 2026-04-16 10:14:48.296940 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB 2026-04-16 10:14:48.296961 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB 2026-04-16 10:14:48.296980 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB 2026-04-16 10:14:48.296998 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB 2026-04-16 10:14:48.297017 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB 
2026-04-16 10:14:48.297036 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB 2026-04-16 10:14:48.297054 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB 2026-04-16 10:14:48.297086 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB 2026-04-16 10:14:48.297105 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB 2026-04-16 10:14:48.297124 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB 2026-04-16 10:14:48.297143 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB 2026-04-16 10:14:48.297163 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB 2026-04-16 10:14:48.297181 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB 2026-04-16 10:14:48.297199 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB 2026-04-16 10:14:48.297226 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB 2026-04-16 10:14:48.297245 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB 2026-04-16 10:14:48.297265 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB 2026-04-16 10:14:48.297283 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB 2026-04-16 10:14:48.297301 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB 2026-04-16 10:14:48.297319 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB 2026-04-16 10:14:48.297338 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB 2026-04-16 10:14:48.297358 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB 2026-04-16 10:14:48.297375 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB 2026-04-16 10:14:48.297405 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB 2026-04-16 10:14:48.297425 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB 2026-04-16 10:14:48.297443 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB 2026-04-16 10:14:48.297461 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB 2026-04-16 10:14:48.297480 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB 2026-04-16 10:14:48.297498 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB 2026-04-16 10:14:48.297553 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB 2026-04-16 10:14:48.297572 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB 2026-04-16 10:14:48.297600 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB 2026-04-16 10:14:48.297639 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB 2026-04-16 
10:14:48.297658 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB 2026-04-16 10:14:48.297678 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB 2026-04-16 10:14:48.297696 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB 2026-04-16 10:14:48.297714 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB 2026-04-16 10:14:48.297733 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB 2026-04-16 10:14:48.297751 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB 2026-04-16 10:14:48.297771 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB 2026-04-16 10:14:48.297788 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB 2026-04-16 10:14:48.297806 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB 2026-04-16 10:14:48.297825 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB 2026-04-16 10:14:48.297844 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB 2026-04-16 10:14:48.297862 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB 2026-04-16 10:14:48.297881 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB 2026-04-16 10:14:48.297897 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB 2026-04-16 10:14:48.297914 | 
orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-16 10:14:48.297931 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-16 10:14:48.297949 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-16 10:14:48.297966 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-16 10:14:48.297982 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-16 10:14:48.298000 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-16 10:14:48.298088 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-16 10:14:48.298104 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-16 10:14:48.298115 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-16 10:14:48.298126 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-16 10:14:48.298149 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-16 10:14:48.298160 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-16 10:14:48.298171 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-16 10:14:48.298182 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-16 10:14:48.298194 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-16 10:14:48.298205 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-16 10:14:48.298216 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-16 10:14:48.298234 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-16 10:14:48.298246 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-16 10:14:48.298257 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-16 10:14:48.298268 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-16 10:14:48.298279 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-16 10:14:48.298290 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-16 10:14:48.298302 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-16 10:14:48.430210 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-04-16 10:14:48.430772 | orchestrator | ++ semver 10.0.0 5.0.0
2026-04-16 10:14:48.488214 | orchestrator |
2026-04-16 10:14:48.488307 | orchestrator | ## Containers @ testbed-node-2
2026-04-16 10:14:48.488322 | orchestrator |
2026-04-16 10:14:48.488334 | orchestrator | + [[ 1 -eq -1 ]]
2026-04-16 10:14:48.488345 | orchestrator | + echo
2026-04-16 10:14:48.488356 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-04-16 10:14:48.488368 | orchestrator | + echo
2026-04-16 10:14:48.488380 | orchestrator | + osism container testbed-node-2 ps
2026-04-16 10:14:49.994378 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-04-16 10:14:49.994474 | orchestrator | 839f3e944355 registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.2.20260328 "dumb-init --single-…" 17 seconds ago Up 16 seconds (health: starting) magnum_conductor
2026-04-16 10:14:49.994492 | orchestrator | c46b55949750 registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.2.20260328 "dumb-init --single-…" 39 seconds ago Up 38 seconds (healthy) magnum_api
2026-04-16 10:14:49.994504 | orchestrator | aa66012547d7 registry.osism.tech/kolla/release/2025.1/grafana:12.4.2.20260328 "dumb-init --single-…" 3 minutes ago Up 3 minutes grafana
2026-04-16 10:14:49.994586 | orchestrator | c6d3488a055e registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_elasticsearch_exporter
2026-04-16 10:14:49.994624 | orchestrator | 43d09aeac2d5 registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_cadvisor
2026-04-16 10:14:49.994636 | orchestrator | 5d5dc377848f registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20260328 "dumb-init --single-…" 6 minutes ago Up 6 minutes prometheus_memcached_exporter
2026-04-16 10:14:49.994648 | orchestrator | b829d29964fd registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_mysqld_exporter
2026-04-16 10:14:49.994659 | orchestrator | e819cc31a1d8 registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328 "dumb-init --single-…" 7 minutes ago Up 7 minutes prometheus_node_exporter
2026-04-16 10:14:49.994670 | orchestrator | 341ca5570c4e registry.osism.tech/kolla/release/2025.1/manila-share:20.0.2.20260328 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) manila_share
2026-04-16 10:14:49.994681 | orchestrator | 50c7b97d25b9 registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_scheduler
2026-04-16 10:14:49.994691 | orchestrator | 07d8424b4d9b registry.osism.tech/kolla/release/2025.1/manila-data:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_data
2026-04-16 10:14:49.994717 | orchestrator | 258be704a88d registry.osism.tech/kolla/release/2025.1/manila-api:20.0.2.20260328 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) manila_api
2026-04-16 10:14:49.994729 | orchestrator | d3b4be60a35d registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_worker
2026-04-16 10:14:49.994740 | orchestrator | 45f31992c69a registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_housekeeping
2026-04-16 10:14:49.994751 | orchestrator | f1dd9f2405e0 registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) octavia_health_manager
2026-04-16 10:14:49.994762 | orchestrator | 32b111dd94f6 registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.2.20260328 "dumb-init --single-…" 15 minutes ago Up 15 minutes octavia_driver_agent
2026-04-16 10:14:49.994773 | orchestrator | 8635b5bcf7ee registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.2.20260328 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) octavia_api
2026-04-16 10:14:49.994801 | orchestrator | 795b9035cfeb registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_notifier
2026-04-16 10:14:49.994814 | orchestrator | 638b50b28cfd registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_listener
2026-04-16 10:14:49.994826 | orchestrator | 34f4be24ea52 registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20260328 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) aodh_evaluator
2026-04-16 10:14:49.994837 | orchestrator | 61c72f842487 registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20260328 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) aodh_api
2026-04-16 10:14:49.994855 | orchestrator | 436f897d1106 registry.osism.tech/kolla/release/2025.1/ceilometer-central:24.0.1.20260328 "dumb-init --single-…" 21 minutes ago Up 21 minutes ceilometer_central
2026-04-16 10:14:49.994866 | orchestrator | 42a7ff5b0186 registry.osism.tech/kolla/release/2025.1/ceilometer-notification:24.0.1.20260328 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) ceilometer_notification
2026-04-16 10:14:49.994877 | orchestrator | a608838c4026 registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_worker
2026-04-16 10:14:49.994888 | orchestrator | 74aec034d48e registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20260328 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) designate_mdns
2026-04-16 10:14:49.994898 | orchestrator | 1266f2fee472 registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20260328 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) designate_producer
2026-04-16 10:14:49.994910 | orchestrator | e903ce04c9bb registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_central
2026-04-16 10:14:49.994923 | orchestrator | b2cab0694ff4 registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_api
2026-04-16 10:14:49.994935 | orchestrator | 6331d66f1de0 registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20260328 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) designate_backend_bind9
2026-04-16 10:14:49.994962 | orchestrator | a2fc204324c5 registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_worker
2026-04-16 10:14:49.994984 | orchestrator | 8b322c310b8f registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_keystone_listener
2026-04-16 10:14:49.994997 | orchestrator | 6b66f74f88db registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20260328 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) barbican_api
2026-04-16 10:14:49.995009 | orchestrator | c26d0bcad712 registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_backup
2026-04-16 10:14:49.995024 | orchestrator | 7e2b5390a855 registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20260328 "dumb-init --single-…" 32 minutes ago Up 31 minutes (healthy) cinder_volume
2026-04-16 10:14:49.995043 | orchestrator | 721b1476dc8d registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-04-16 10:14:49.995063 | orchestrator | 0dcc20203802 registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20260328 "dumb-init --single-…" 33 minutes ago Up 31 minutes (healthy) cinder_api
2026-04-16 10:14:49.995101 | orchestrator | c748a21903c9 registry.osism.tech/kolla/release/2025.1/glance-api:30.1.1.20260328 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) glance_api
2026-04-16 10:14:49.995124 | orchestrator | 2d86b47e19a6 registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) skyline_console
2026-04-16 10:14:49.995155 | orchestrator | a40dc0c3ad0e registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20260328 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) skyline_apiserver
2026-04-16 10:14:49.995173 | orchestrator | 18ba8bc68901 registry.osism.tech/kolla/release/2025.1/horizon:25.3.3.20260328 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) horizon
2026-04-16 10:14:49.995194 | orchestrator | 6d72bb8f781e registry.osism.tech/kolla/release/2025.1/nova-novncproxy:31.2.1.20260328 "dumb-init --single-…" 55 minutes ago Up 46 minutes (healthy) nova_novncproxy
2026-04-16 10:14:49.995212 | orchestrator | 68adae97e48b registry.osism.tech/kolla/release/2025.1/nova-conductor:31.2.1.20260328 "dumb-init --single-…" 56 minutes ago Up 46 minutes (healthy) nova_conductor
2026-04-16 10:14:49.996497 | orchestrator | f30796d0fed8 registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 58 minutes (healthy) nova_metadata
2026-04-16 10:14:49.996572 | orchestrator | a9f5646cc8fd registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20260328 "dumb-init --single-…" 58 minutes ago Up 45 minutes (healthy) nova_api
2026-04-16 10:14:49.996585 | orchestrator | 7473e95b9fcc registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20260328 "dumb-init --single-…" 59 minutes ago Up 46 minutes (healthy) nova_scheduler
2026-04-16 10:14:49.996595 | orchestrator | 6341825f1faf registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) neutron_server
2026-04-16 10:14:49.996605 | orchestrator | 822bbd906a58 registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) placement_api
2026-04-16 10:14:49.996614 | orchestrator | 1c5c8daf6ae0 registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone
2026-04-16 10:14:49.996624 | orchestrator | aaf1b209f9c0 registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_fernet
2026-04-16 10:14:49.996638 | orchestrator | 3fbea6d8d833 registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20260328 "dumb-init --single-…" About an hour ago Up About an hour (healthy) keystone_ssh
2026-04-16 10:14:49.996648 | orchestrator | 175011a67e5c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-04-16 10:14:49.996658 | orchestrator | 5bb8eb34253d registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 2 hours ago Up 2 hours ceph-mgr-testbed-node-2
2026-04-16 10:14:49.996668 | orchestrator | 6b24f5cd3734 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 2 hours ago Up 2 hours ceph-mon-testbed-node-2
2026-04-16 10:14:49.996677 | orchestrator | e49e4fbdfbde registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_northd
2026-04-16 10:14:49.996687 | orchestrator | c01cbfac2cea registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db_relay_1
2026-04-16 10:14:49.996724 | orchestrator | e53990987d8e registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20260328 "dumb-init --single-…" 2 hours ago Up 2 hours ovn_sb_db
2026-04-16 10:14:49.996734 | orchestrator | 5a158fdcf4e8 registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_nb_db
2026-04-16 10:14:49.996744 | orchestrator | ec1c2bfd625d registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours ovn_controller
2026-04-16 10:14:49.996753 | orchestrator | afc23448ae14 registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_vswitchd
2026-04-16 10:14:49.996763 | orchestrator | 8381f50081df registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) openvswitch_db
2026-04-16 10:14:49.996772 | orchestrator | 6bca4a882940 registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) rabbitmq
2026-04-16 10:14:49.996794 | orchestrator | 091333ae5f31 registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.16.20260328 "dumb-init -- kolla_…" 3 hours ago Up 3 hours (healthy) mariadb
2026-04-16 10:14:49.996804 | orchestrator | d65926bc207d registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis_sentinel
2026-04-16 10:14:49.996814 | orchestrator | 08941b763f08 registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) redis
2026-04-16 10:14:49.996823 | orchestrator | 2ec8a029cb5b registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) memcached
2026-04-16 10:14:49.996833 | orchestrator | d75f444df5c6 registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch_dashboards
2026-04-16 10:14:49.996843 | orchestrator | ee874371b870 registry.osism.tech/kolla/release/2025.1/opensearch:2.19.5.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) opensearch
2026-04-16 10:14:49.996852 | orchestrator | 4ab650678aba registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours keepalived
2026-04-16 10:14:49.996866 | orchestrator | b3ef62bb1702 registry.osism.tech/kolla/release/2025.1/proxysql:3.0.6.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) proxysql
2026-04-16 10:14:49.996876 | orchestrator | e038a0700026 registry.osism.tech/kolla/release/2025.1/haproxy:2.8.16.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours (healthy) haproxy
2026-04-16 10:14:49.996886 | orchestrator | 1a7e1ff4418e registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours cron
2026-04-16 10:14:49.996896 | orchestrator | b9a0f0a4242d registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours kolla_toolbox
2026-04-16 10:14:49.996913 | orchestrator | 45137524a73d registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328 "dumb-init --single-…" 3 hours ago Up 3 hours fluentd
2026-04-16 10:14:50.131167 | orchestrator |
2026-04-16 10:14:50.131264 | orchestrator | ## Images @ testbed-node-2
2026-04-16 10:14:50.131278 | orchestrator |
2026-04-16 10:14:50.131289 | orchestrator | + echo
2026-04-16 10:14:50.131299 | orchestrator | + echo '## Images @ testbed-node-2'
2026-04-16 10:14:50.131309 | orchestrator | + echo
2026-04-16 10:14:50.131318 | orchestrator | + osism container testbed-node-2 images
2026-04-16 10:14:51.663110 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-04-16 10:14:51.663178 | orchestrator | registry.osism.tech/kolla/release/2025.1/keepalived 2.2.8.20260328 cc29bd9a85e4 2 weeks ago 288MB
2026-04-16 10:14:51.663185 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch-dashboards 2.19.5.20260328 f834ead10f11 2 weeks ago 1.54GB
2026-04-16 10:14:51.663190 | orchestrator | registry.osism.tech/kolla/release/2025.1/opensearch 2.19.5.20260328 d36ae5f707fb 2 weeks ago 1.57GB
2026-04-16 10:14:51.663194 | orchestrator | registry.osism.tech/kolla/release/2025.1/fluentd 5.0.9.20260328 e1596a0c11a4 2 weeks ago 590MB
2026-04-16 10:14:51.663199 | orchestrator | registry.osism.tech/kolla/release/2025.1/memcached 1.6.24.20260328 09b41eff0fc1 2 weeks ago 277MB
2026-04-16 10:14:51.663203 | orchestrator | registry.osism.tech/kolla/release/2025.1/grafana 12.4.2.20260328 3842b7ef2d0c 2 weeks ago 1.04GB
2026-04-16 10:14:51.663207 | orchestrator | registry.osism.tech/kolla/release/2025.1/rabbitmq 4.1.8.20260328 c6408fdc6cf4 2 weeks ago 350MB
2026-04-16 10:14:51.663211 | orchestrator | registry.osism.tech/kolla/release/2025.1/proxysql 3.0.6.20260328 ccffdf9574f0 2 weeks ago 427MB
2026-04-16 10:14:51.663216 | orchestrator | registry.osism.tech/kolla/release/2025.1/kolla-toolbox 20.3.1.20260328 28c0d33bbf93 2 weeks ago 683MB
2026-04-16 10:14:51.663220 | orchestrator | registry.osism.tech/kolla/release/2025.1/cron 3.0.20260328 83ceba86723e 2 weeks ago 277MB
2026-04-16 10:14:51.663224 | orchestrator | registry.osism.tech/kolla/release/2025.1/haproxy 2.8.16.20260328 cf24d3343dd6 2 weeks ago 285MB
2026-04-16 10:14:51.663228 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-db-server 3.5.1.20260328 2df964b9b6ef 2 weeks ago 293MB
2026-04-16 10:14:51.663233 | orchestrator | registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd 3.5.1.20260328 d56dc4fd4981 2 weeks ago 293MB
2026-04-16 10:14:51.663237 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis-sentinel 7.0.15.20260328 c513d0722dfc 2 weeks ago 284MB
2026-04-16 10:14:51.663241 | orchestrator | registry.osism.tech/kolla/release/2025.1/redis 7.0.15.20260328 0640729e8c26 2 weeks ago 284MB
2026-04-16 10:14:51.663246 | orchestrator | registry.osism.tech/kolla/release/2025.1/horizon 25.3.3.20260328 ee0ad6e2185e 2 weeks ago 1.2GB
2026-04-16 10:14:51.663250 | orchestrator | registry.osism.tech/kolla/release/2025.1/mariadb-server 10.11.16.20260328 886dcd3e3f53 2 weeks ago 463MB
2026-04-16 10:14:51.663254 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter 0.15.0.20260328 995036f125d2 2 weeks ago 309MB
2026-04-16 10:14:51.663258 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor 0.49.2.20260328 f7140e8a13d8 2 weeks ago 368MB
2026-04-16 10:14:51.663262 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter 1.8.0.20260328 c9ee75870dff 2 weeks ago 303MB
2026-04-16 10:14:51.663281 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter 0.16.0.20260328 117acc95a5ad 2 weeks ago 312MB
2026-04-16 10:14:51.663286 | orchestrator | registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter 1.8.2.20260328 4d11b36c2bda 2 weeks ago 317MB
2026-04-16 10:14:51.663291 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server 25.3.1.20260328 859fd9ce89d9 2 weeks ago 301MB
2026-04-16 10:14:51.663295 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server 25.3.1.20260328 fb0f3707730d 2 weeks ago 301MB
2026-04-16 10:14:51.663300 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-controller 25.3.1.20260328 3228ba87088e 2 weeks ago 301MB
2026-04-16 10:14:51.663304 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-northd 25.3.1.20260328 65c0953e4c39 2 weeks ago 301MB
2026-04-16 10:14:51.663308 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone 27.0.1.20260328 b31ea490ee2a 2 weeks ago 1.09GB
2026-04-16 10:14:51.663312 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-ssh 27.0.1.20260328 40f5d9a677d1 2 weeks ago 1.06GB
2026-04-16 10:14:51.663316 | orchestrator | registry.osism.tech/kolla/release/2025.1/keystone-fernet 27.0.1.20260328 f133afc9d53b 2 weeks ago 1.05GB
2026-04-16 10:14:51.663330 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-central 24.0.1.20260328 d407dd61fee1 2 weeks ago 997MB
2026-04-16 10:14:51.663335 | orchestrator | registry.osism.tech/kolla/release/2025.1/ceilometer-notification 24.0.1.20260328 a0d400ce4fdd 2 weeks ago 996MB
2026-04-16 10:14:51.663339 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-driver-agent 16.0.2.20260328 f07869d78758 2 weeks ago 1.07GB
2026-04-16 10:14:51.663343 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-api 16.0.2.20260328 7118289a0d17 2 weeks ago 1.07GB
2026-04-16 10:14:51.663347 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-worker 16.0.2.20260328 1065bc696018 2 weeks ago 1.05GB
2026-04-16 10:14:51.663351 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-health-manager 16.0.2.20260328 0adbcb202c49 2 weeks ago 1.05GB
2026-04-16 10:14:51.663356 | orchestrator | registry.osism.tech/kolla/release/2025.1/octavia-housekeeping 16.0.2.20260328 1e4a4601f94f 2 weeks ago 1.05GB
2026-04-16 10:14:51.663360 | orchestrator | registry.osism.tech/kolla/release/2025.1/placement-api 13.0.0.20260328 b52f42ecbb4d 2 weeks ago 996MB
2026-04-16 10:14:51.663364 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-listener 20.0.0.20260328 afbc43250d60 2 weeks ago 995MB
2026-04-16 10:14:51.663379 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-evaluator 20.0.0.20260328 26d81adaeaae 2 weeks ago 995MB
2026-04-16 10:14:51.663383 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-notifier 20.0.0.20260328 aa74bb4c136d 2 weeks ago 995MB
2026-04-16 10:14:51.663387 | orchestrator | registry.osism.tech/kolla/release/2025.1/aodh-api 20.0.0.20260328 bb920611ad39 2 weeks ago 994MB
2026-04-16 10:14:51.663391 | orchestrator | registry.osism.tech/kolla/release/2025.1/glance-api 30.1.1.20260328 525bb863082d 2 weeks ago 1.12GB
2026-04-16 10:14:51.663396 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-volume 26.2.1.20260328 78cc3d4efb57 2 weeks ago 1.79GB
2026-04-16 10:14:51.663400 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-scheduler 26.2.1.20260328 b72d2e7568f8 2 weeks ago 1.43GB
2026-04-16 10:14:51.663404 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-api 26.2.1.20260328 2583a0d99734 2 weeks ago 1.43GB
2026-04-16 10:14:51.663414 | orchestrator | registry.osism.tech/kolla/release/2025.1/cinder-backup 26.2.1.20260328 a970df3ae580 2 weeks ago 1.44GB
2026-04-16 10:14:51.663418 | orchestrator | registry.osism.tech/kolla/release/2025.1/neutron-server 26.0.3.20260328 b084449c71f7 2 weeks ago 1.24GB
2026-04-16 10:14:51.663422 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-console 6.0.1.20260328 cf9981ab1a70 2 weeks ago 1.07GB
2026-04-16 10:14:51.663426 | orchestrator | registry.osism.tech/kolla/release/2025.1/skyline-apiserver 6.0.1.20260328 d52b28f7bdf2 2 weeks ago 1.02GB
2026-04-16 10:14:51.663430 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-worker 20.0.1.20260328 10c316f8a88d 2 weeks ago 1GB
2026-04-16 10:14:51.663435 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener 20.0.1.20260328 f1c21f7912dc 2 weeks ago 1GB
2026-04-16 10:14:51.663439 | orchestrator | registry.osism.tech/kolla/release/2025.1/barbican-api 20.0.1.20260328 43f0933a84ab 2 weeks ago 1GB
2026-04-16 10:14:51.663446 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-conductor 20.0.2.20260328 8cf236db44c6 2 weeks ago 1.27GB
2026-04-16 10:14:51.663451 | orchestrator | registry.osism.tech/kolla/release/2025.1/magnum-api 20.0.2.20260328 9a37ca6883b8 2 weeks ago 1.15GB
2026-04-16 10:14:51.663455 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-backend-bind9 20.0.1.20260328 bc68ee83deb0 2 weeks ago 1.01GB
2026-04-16 10:14:51.663459 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-api 20.0.1.20260328 c0c239664d22 2 weeks ago 1GB
2026-04-16 10:14:51.663463 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-mdns 20.0.1.20260328 c268b1854421 2 weeks ago 1GB
2026-04-16 10:14:51.663468 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-worker 20.0.1.20260328 3ce3202d2f8d 2 weeks ago 1.01GB
2026-04-16 10:14:51.663472 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-central 20.0.1.20260328 50fabfae16b4 2 weeks ago 1GB
2026-04-16 10:14:51.663476 | orchestrator | registry.osism.tech/kolla/release/2025.1/designate-producer 20.0.1.20260328 23baf4bae3a6 2 weeks ago 1GB
2026-04-16 10:14:51.663484 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-api 31.2.1.20260328 7100cf172da2 2 weeks ago 1.23GB
2026-04-16 10:14:51.663488 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-novncproxy 31.2.1.20260328 003749dfd921 2 weeks ago 1.39GB
2026-04-16 10:14:51.663492 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-scheduler 31.2.1.20260328 0b8714cecfd8 2 weeks ago 1.23GB
2026-04-16 10:14:51.663496 | orchestrator | registry.osism.tech/kolla/release/2025.1/nova-conductor 31.2.1.20260328 d35210169004 2 weeks ago 1.23GB
2026-04-16 10:14:51.663501 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-data 20.0.2.20260328 5c1ce4fd1849 2 weeks ago 1.07GB
2026-04-16 10:14:51.663505 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-scheduler 20.0.2.20260328 29e4081372f9 2 weeks ago 1.07GB
2026-04-16 10:14:51.663509 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-api 20.0.2.20260328 949d0dfdab5b 2 weeks ago 1.07GB
2026-04-16 10:14:51.663513 | orchestrator | registry.osism.tech/kolla/release/2025.1/manila-share 20.0.2.20260328 d5693cb24e6d 2 weeks ago 1.24GB
2026-04-16 10:14:51.663517 | orchestrator | registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay 25.3.1.20260328 08ae9a102f53 2 weeks ago 301MB
2026-04-16 10:14:51.663554 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 4 months ago 322MB
2026-04-16 10:14:51.663562 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 4 months ago 266MB
2026-04-16 10:14:51.663566 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 4 months ago 1.56GB
2026-04-16 10:14:51.663570 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 4 months ago 276MB
2026-04-16 10:14:51.663575 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 4 months ago 1.53GB
2026-04-16 10:14:51.663579 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 4 months ago 669MB
2026-04-16 10:14:51.663583 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 4 months ago 265MB
2026-04-16 10:14:51.663614 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 4 months ago 1.02GB
2026-04-16 10:14:51.663619 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 4 months ago 412MB
2026-04-16 10:14:51.663623 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 4 months ago 274MB
2026-04-16 10:14:51.663627 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 4 months ago 578MB
2026-04-16 10:14:51.663631 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 4 months ago 273MB
2026-04-16 10:14:51.663635 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 4 months ago 273MB
2026-04-16 10:14:51.663642 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 4 months ago 452MB
2026-04-16 10:14:51.663646 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 4 months ago 1.15GB
2026-04-16 10:14:51.663651 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 4 months ago 301MB
2026-04-16 10:14:51.663655 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 4 months ago 298MB
2026-04-16 10:14:51.663709 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 4 months ago 357MB
2026-04-16 10:14:51.663880 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 4 months ago 292MB
2026-04-16 10:14:51.663889 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 4 months ago 305MB
2026-04-16 10:14:51.663894 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 4 months ago 279MB
2026-04-16 10:14:51.663899 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 4 months ago 975MB
2026-04-16 10:14:51.663904 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 4 months ago 279MB
2026-04-16 10:14:51.663909 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 4 months ago 1.37GB
2026-04-16 10:14:51.663914 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 4 months ago 1.21GB
2026-04-16 10:14:51.663919 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 4 months ago 1.21GB
2026-04-16 10:14:51.663929 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 4 months ago 1.21GB
2026-04-16 10:14:51.663934 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 4 months ago 976MB
2026-04-16 10:14:51.663938 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 4 months ago 976MB
2026-04-16 10:14:51.663943 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 4 months ago 1.13GB
2026-04-16 10:14:51.663948 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 4 months ago 1.24GB
2026-04-16 10:14:51.663953 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 4 months ago 1.22GB
2026-04-16 10:14:51.663958 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 4 months ago 1.06GB
2026-04-16 10:14:51.663962 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 4 months ago 1.05GB
2026-04-16 10:14:51.663967 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 4 months ago 1.05GB
2026-04-16 10:14:51.663972 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 4 months ago 974MB
2026-04-16 10:14:51.663977 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 4 months ago 974MB
2026-04-16 10:14:51.663982 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 4 months ago 974MB
2026-04-16 10:14:51.663986 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 4 months ago 973MB
2026-04-16 10:14:51.663991 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 4 months ago 991MB
2026-04-16 10:14:51.663996 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 4 months ago 991MB
2026-04-16 10:14:51.664001 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 4 months ago 990MB
2026-04-16 10:14:51.664005 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 4 months ago 1.09GB
2026-04-16 10:14:51.664010 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 4 months ago 1.04GB
2026-04-16 10:14:51.664015 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 4 months ago 1.04GB
2026-04-16 10:14:51.664020 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 4 months ago 1.03GB
2026-04-16 10:14:51.664025 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 4 months ago 1.03GB
2026-04-16 10:14:51.664030 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 4 months ago 1.05GB
2026-04-16 10:14:51.664035 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 4 months ago 1.03GB
2026-04-16 10:14:51.664043 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 4 months ago 1.05GB
2026-04-16 10:14:51.664048 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 4 months ago 1.16GB
2026-04-16 10:14:51.664056 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 4 months ago 1.1GB
2026-04-16 10:14:51.664060 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 4 months ago 983MB
2026-04-16 10:14:51.664065 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 4 months ago 989MB
2026-04-16 10:14:51.664069 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 4 months ago 984MB
2026-04-16 10:14:51.664073 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 4 months ago 984MB
2026-04-16 10:14:51.664077 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 4 months ago 989MB
2026-04-16 10:14:51.664081 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 4 months ago 984MB
2026-04-16 10:14:51.664086 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 4 months ago 1.05GB
2026-04-16 10:14:51.664090 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 4 months ago 990MB
2026-04-16 10:14:51.664094 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 4 months ago 1.72GB
2026-04-16 10:14:51.664098 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 4 months ago 1.4GB
2026-04-16 10:14:51.664103 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 4 months ago 1.41GB
2026-04-16 10:14:51.664107 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 4 months ago 1.4GB
2026-04-16 10:14:51.664111 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 4 months ago 840MB
2026-04-16 10:14:51.664115 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 4 months ago 840MB
2026-04-16 10:14:51.664120 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 4 months ago 840MB
2026-04-16 10:14:51.664124 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 4 months ago 840MB
2026-04-16 10:14:51.664152 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 11 months ago 1.27GB
2026-04-16 10:14:51.800673 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-04-16 10:14:51.810321 | orchestrator | + set -e
2026-04-16 10:14:51.810431 | orchestrator | + source /opt/manager-vars.sh
2026-04-16 10:14:51.810452 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-16 10:14:51.810472 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-16 10:14:51.810490 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-16 10:14:51.810517 | orchestrator | ++ CEPH_VERSION=reef
2026-04-16 10:14:51.810568 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-16 10:14:51.810588 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-16 10:14:51.810605 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 10:14:51.810624 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 10:14:51.810642 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-16 10:14:51.810660 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-16 10:14:51.810678 | orchestrator | ++ export ARA=false
2026-04-16 10:14:51.810695 | orchestrator | ++ ARA=false
2026-04-16 10:14:51.810713 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-16 10:14:51.810732 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-16 10:14:51.810749 | orchestrator | ++ export TEMPEST=false
2026-04-16 10:14:51.810768 | orchestrator | ++ TEMPEST=false
2026-04-16 10:14:51.810787 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 10:14:51.810805 | orchestrator | ++ IS_ZUUL=true
2026-04-16 10:14:51.810856 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 10:14:51.810869 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 10:14:51.810880 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 10:14:51.810894 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 10:14:51.810913 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 10:14:51.810930 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 10:14:51.810966 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 10:14:51.810986 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 10:14:51.811006 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 10:14:51.811025 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 10:14:51.811044 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-16 10:14:51.811057 | orchestrator | ++
RABBITMQ3TO4=true 2026-04-16 10:14:51.811068 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-04-16 10:14:51.811079 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2026-04-16 10:14:51.818293 | orchestrator | + set -e 2026-04-16 10:14:51.819087 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-04-16 10:14:51.819124 | orchestrator | ++ export INTERACTIVE=false 2026-04-16 10:14:51.819197 | orchestrator | ++ INTERACTIVE=false 2026-04-16 10:14:51.819205 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-04-16 10:14:51.819211 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-04-16 10:14:51.819218 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-04-16 10:14:51.819232 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-04-16 10:14:51.823175 | orchestrator | 2026-04-16 10:14:51.823221 | orchestrator | # Ceph status 2026-04-16 10:14:51.823234 | orchestrator | 2026-04-16 10:14:51.823245 | orchestrator | ++ export MANAGER_VERSION=10.0.0 2026-04-16 10:14:51.823256 | orchestrator | ++ MANAGER_VERSION=10.0.0 2026-04-16 10:14:51.823267 | orchestrator | + echo 2026-04-16 10:14:51.823278 | orchestrator | + echo '# Ceph status' 2026-04-16 10:14:51.823289 | orchestrator | + echo 2026-04-16 10:14:51.823300 | orchestrator | + ceph -s 2026-04-16 10:14:52.442792 | orchestrator | cluster: 2026-04-16 10:14:52.442883 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-04-16 10:14:52.442897 | orchestrator | health: HEALTH_OK 2026-04-16 10:14:52.442907 | orchestrator | 2026-04-16 10:14:52.442916 | orchestrator | services: 2026-04-16 10:14:52.442926 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 2h) 2026-04-16 10:14:52.442936 | orchestrator | mgr: testbed-node-0(active, since 119m), standbys: testbed-node-1, testbed-node-2 2026-04-16 10:14:52.442945 | orchestrator | 
mds: 1/1 daemons up, 2 standby 2026-04-16 10:14:52.442954 | orchestrator | osd: 6 osds: 6 up (since 102m), 6 in (since 4h) 2026-04-16 10:14:52.442963 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-04-16 10:14:52.442972 | orchestrator | 2026-04-16 10:14:52.442980 | orchestrator | data: 2026-04-16 10:14:52.442989 | orchestrator | volumes: 1/1 healthy 2026-04-16 10:14:52.442998 | orchestrator | pools: 14 pools, 401 pgs 2026-04-16 10:14:52.443006 | orchestrator | objects: 821 objects, 2.8 GiB 2026-04-16 10:14:52.443015 | orchestrator | usage: 7.9 GiB used, 112 GiB / 120 GiB avail 2026-04-16 10:14:52.443024 | orchestrator | pgs: 401 active+clean 2026-04-16 10:14:52.443032 | orchestrator | 2026-04-16 10:14:52.443041 | orchestrator | io: 2026-04-16 10:14:52.443049 | orchestrator | client: 1022 B/s rd, 0 op/s rd, 0 op/s wr 2026-04-16 10:14:52.443058 | orchestrator | 2026-04-16 10:14:52.492142 | orchestrator | 2026-04-16 10:14:52.492223 | orchestrator | # Ceph versions 2026-04-16 10:14:52.492232 | orchestrator | 2026-04-16 10:14:52.492240 | orchestrator | + echo 2026-04-16 10:14:52.492246 | orchestrator | + echo '# Ceph versions' 2026-04-16 10:14:52.492254 | orchestrator | + echo 2026-04-16 10:14:52.492261 | orchestrator | + ceph versions 2026-04-16 10:14:53.115868 | orchestrator | { 2026-04-16 10:14:53.115936 | orchestrator | "mon": { 2026-04-16 10:14:53.115943 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-16 10:14:53.115948 | orchestrator | }, 2026-04-16 10:14:53.115952 | orchestrator | "mgr": { 2026-04-16 10:14:53.115956 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-16 10:14:53.115960 | orchestrator | }, 2026-04-16 10:14:53.115964 | orchestrator | "osd": { 2026-04-16 10:14:53.115968 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-04-16 10:14:53.115972 | orchestrator | }, 
2026-04-16 10:14:53.115976 | orchestrator | "mds": { 2026-04-16 10:14:53.115980 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-16 10:14:53.116000 | orchestrator | }, 2026-04-16 10:14:53.116004 | orchestrator | "rgw": { 2026-04-16 10:14:53.116008 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-04-16 10:14:53.116012 | orchestrator | }, 2026-04-16 10:14:53.116016 | orchestrator | "overall": { 2026-04-16 10:14:53.116020 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-04-16 10:14:53.116024 | orchestrator | } 2026-04-16 10:14:53.116028 | orchestrator | } 2026-04-16 10:14:53.159219 | orchestrator | 2026-04-16 10:14:53.159311 | orchestrator | # Ceph OSD tree 2026-04-16 10:14:53.159324 | orchestrator | 2026-04-16 10:14:53.159335 | orchestrator | + echo 2026-04-16 10:14:53.159347 | orchestrator | + echo '# Ceph OSD tree' 2026-04-16 10:14:53.159365 | orchestrator | + echo 2026-04-16 10:14:53.159381 | orchestrator | + ceph osd df tree 2026-04-16 10:14:53.643732 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-04-16 10:14:53.643842 | orchestrator | -1 0.11691 - 120 GiB 7.9 GiB 7.6 GiB 45 KiB 309 MiB 112 GiB 6.62 1.00 - root default 2026-04-16 10:14:53.643856 | orchestrator | -5 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 108 MiB 37 GiB 6.63 1.00 - host testbed-node-3 2026-04-16 10:14:53.643868 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 7 KiB 54 MiB 19 GiB 6.15 0.93 174 up osd.0 2026-04-16 10:14:53.643879 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 8 KiB 54 MiB 19 GiB 7.11 1.08 218 up osd.3 2026-04-16 10:14:53.643890 | orchestrator | -3 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 108 MiB 37 GiB 6.63 1.00 - host testbed-node-4 2026-04-16 10:14:53.643930 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.6 GiB 6 KiB 
50 MiB 18 GiB 8.06 1.22 195 up osd.2 2026-04-16 10:14:53.643942 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 1005 MiB 9 KiB 58 MiB 19 GiB 5.20 0.79 195 up osd.4 2026-04-16 10:14:53.643953 | orchestrator | -7 0.03897 - 40 GiB 2.6 GiB 2.5 GiB 15 KiB 92 MiB 37 GiB 6.59 1.00 - host testbed-node-5 2026-04-16 10:14:53.643964 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 8 KiB 46 MiB 18 GiB 7.90 1.19 197 up osd.1 2026-04-16 10:14:53.643975 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 7 KiB 46 MiB 19 GiB 5.28 0.80 191 up osd.5 2026-04-16 10:14:53.643986 | orchestrator | TOTAL 120 GiB 7.9 GiB 7.6 GiB 48 KiB 309 MiB 112 GiB 6.62 2026-04-16 10:14:53.643998 | orchestrator | MIN/MAX VAR: 0.79/1.22 STDDEV: 1.16 2026-04-16 10:14:53.691062 | orchestrator | 2026-04-16 10:14:53.691189 | orchestrator | # Ceph monitor status 2026-04-16 10:14:53.691213 | orchestrator | 2026-04-16 10:14:53.691233 | orchestrator | + echo 2026-04-16 10:14:53.691251 | orchestrator | + echo '# Ceph monitor status' 2026-04-16 10:14:53.691271 | orchestrator | + echo 2026-04-16 10:14:53.691290 | orchestrator | + ceph mon stat 2026-04-16 10:14:54.232264 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 38, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-04-16 10:14:54.275662 | orchestrator | 2026-04-16 10:14:54.275777 | orchestrator | # Ceph quorum status 2026-04-16 10:14:54.275803 | orchestrator | 2026-04-16 10:14:54.275824 | orchestrator | + echo 2026-04-16 10:14:54.275843 | orchestrator | + echo '# Ceph quorum status' 2026-04-16 10:14:54.275864 | orchestrator | + echo 2026-04-16 10:14:54.276648 | orchestrator | + ceph quorum_status 2026-04-16 10:14:54.276680 | orchestrator | + jq 2026-04-16 
10:14:54.893698 | orchestrator | { 2026-04-16 10:14:54.893798 | orchestrator | "election_epoch": 38, 2026-04-16 10:14:54.893814 | orchestrator | "quorum": [ 2026-04-16 10:14:54.893825 | orchestrator | 0, 2026-04-16 10:14:54.893834 | orchestrator | 1, 2026-04-16 10:14:54.893843 | orchestrator | 2 2026-04-16 10:14:54.893851 | orchestrator | ], 2026-04-16 10:14:54.893860 | orchestrator | "quorum_names": [ 2026-04-16 10:14:54.893869 | orchestrator | "testbed-node-0", 2026-04-16 10:14:54.893902 | orchestrator | "testbed-node-1", 2026-04-16 10:14:54.893923 | orchestrator | "testbed-node-2" 2026-04-16 10:14:54.893933 | orchestrator | ], 2026-04-16 10:14:54.893942 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-04-16 10:14:54.893953 | orchestrator | "quorum_age": 7850, 2026-04-16 10:14:54.893961 | orchestrator | "features": { 2026-04-16 10:14:54.893970 | orchestrator | "quorum_con": "4540138322906710015", 2026-04-16 10:14:54.893980 | orchestrator | "quorum_mon": [ 2026-04-16 10:14:54.893989 | orchestrator | "kraken", 2026-04-16 10:14:54.893998 | orchestrator | "luminous", 2026-04-16 10:14:54.894008 | orchestrator | "mimic", 2026-04-16 10:14:54.894138 | orchestrator | "osdmap-prune", 2026-04-16 10:14:54.894153 | orchestrator | "nautilus", 2026-04-16 10:14:54.894163 | orchestrator | "octopus", 2026-04-16 10:14:54.894173 | orchestrator | "pacific", 2026-04-16 10:14:54.894183 | orchestrator | "elector-pinging", 2026-04-16 10:14:54.894192 | orchestrator | "quincy", 2026-04-16 10:14:54.894201 | orchestrator | "reef" 2026-04-16 10:14:54.894220 | orchestrator | ] 2026-04-16 10:14:54.894230 | orchestrator | }, 2026-04-16 10:14:54.894240 | orchestrator | "monmap": { 2026-04-16 10:14:54.894249 | orchestrator | "epoch": 1, 2026-04-16 10:14:54.894260 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-04-16 10:14:54.894272 | orchestrator | "modified": "2026-04-16T05:45:49.118505Z", 2026-04-16 10:14:54.894282 | orchestrator | "created": 
"2026-04-16T05:45:49.118505Z", 2026-04-16 10:14:54.894290 | orchestrator | "min_mon_release": 18, 2026-04-16 10:14:54.894312 | orchestrator | "min_mon_release_name": "reef", 2026-04-16 10:14:54.894330 | orchestrator | "election_strategy": 1, 2026-04-16 10:14:54.894340 | orchestrator | "disallowed_leaders: ": "", 2026-04-16 10:14:54.894350 | orchestrator | "stretch_mode": false, 2026-04-16 10:14:54.894358 | orchestrator | "tiebreaker_mon": "", 2026-04-16 10:14:54.894367 | orchestrator | "removed_ranks: ": "", 2026-04-16 10:14:54.894376 | orchestrator | "features": { 2026-04-16 10:14:54.894385 | orchestrator | "persistent": [ 2026-04-16 10:14:54.894394 | orchestrator | "kraken", 2026-04-16 10:14:54.894404 | orchestrator | "luminous", 2026-04-16 10:14:54.894413 | orchestrator | "mimic", 2026-04-16 10:14:54.894423 | orchestrator | "osdmap-prune", 2026-04-16 10:14:54.894432 | orchestrator | "nautilus", 2026-04-16 10:14:54.894454 | orchestrator | "octopus", 2026-04-16 10:14:54.894463 | orchestrator | "pacific", 2026-04-16 10:14:54.894480 | orchestrator | "elector-pinging", 2026-04-16 10:14:54.894490 | orchestrator | "quincy", 2026-04-16 10:14:54.894498 | orchestrator | "reef" 2026-04-16 10:14:54.894507 | orchestrator | ], 2026-04-16 10:14:54.894516 | orchestrator | "optional": [] 2026-04-16 10:14:54.894525 | orchestrator | }, 2026-04-16 10:14:54.894552 | orchestrator | "mons": [ 2026-04-16 10:14:54.894561 | orchestrator | { 2026-04-16 10:14:54.894569 | orchestrator | "rank": 0, 2026-04-16 10:14:54.894577 | orchestrator | "name": "testbed-node-0", 2026-04-16 10:14:54.894585 | orchestrator | "public_addrs": { 2026-04-16 10:14:54.894594 | orchestrator | "addrvec": [ 2026-04-16 10:14:54.894603 | orchestrator | { 2026-04-16 10:14:54.894611 | orchestrator | "type": "v2", 2026-04-16 10:14:54.894619 | orchestrator | "addr": "192.168.16.8:3300", 2026-04-16 10:14:54.894628 | orchestrator | "nonce": 0 2026-04-16 10:14:54.894637 | orchestrator | }, 2026-04-16 10:14:54.894646 | 
orchestrator | { 2026-04-16 10:14:54.894655 | orchestrator | "type": "v1", 2026-04-16 10:14:54.894663 | orchestrator | "addr": "192.168.16.8:6789", 2026-04-16 10:14:54.894672 | orchestrator | "nonce": 0 2026-04-16 10:14:54.894681 | orchestrator | } 2026-04-16 10:14:54.894689 | orchestrator | ] 2026-04-16 10:14:54.894698 | orchestrator | }, 2026-04-16 10:14:54.894707 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-04-16 10:14:54.894715 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-04-16 10:14:54.894725 | orchestrator | "priority": 0, 2026-04-16 10:14:54.894734 | orchestrator | "weight": 0, 2026-04-16 10:14:54.894742 | orchestrator | "crush_location": "{}" 2026-04-16 10:14:54.894750 | orchestrator | }, 2026-04-16 10:14:54.894758 | orchestrator | { 2026-04-16 10:14:54.894767 | orchestrator | "rank": 1, 2026-04-16 10:14:54.894776 | orchestrator | "name": "testbed-node-1", 2026-04-16 10:14:54.894785 | orchestrator | "public_addrs": { 2026-04-16 10:14:54.894794 | orchestrator | "addrvec": [ 2026-04-16 10:14:54.894802 | orchestrator | { 2026-04-16 10:14:54.894810 | orchestrator | "type": "v2", 2026-04-16 10:14:54.894818 | orchestrator | "addr": "192.168.16.11:3300", 2026-04-16 10:14:54.894846 | orchestrator | "nonce": 0 2026-04-16 10:14:54.894856 | orchestrator | }, 2026-04-16 10:14:54.894863 | orchestrator | { 2026-04-16 10:14:54.894872 | orchestrator | "type": "v1", 2026-04-16 10:14:54.894881 | orchestrator | "addr": "192.168.16.11:6789", 2026-04-16 10:14:54.894889 | orchestrator | "nonce": 0 2026-04-16 10:14:54.894897 | orchestrator | } 2026-04-16 10:14:54.894906 | orchestrator | ] 2026-04-16 10:14:54.894915 | orchestrator | }, 2026-04-16 10:14:54.894924 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-04-16 10:14:54.894934 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-04-16 10:14:54.894943 | orchestrator | "priority": 0, 2026-04-16 10:14:54.894951 | orchestrator | "weight": 0, 2026-04-16 10:14:54.894959 | orchestrator | 
"crush_location": "{}" 2026-04-16 10:14:54.894968 | orchestrator | }, 2026-04-16 10:14:54.894977 | orchestrator | { 2026-04-16 10:14:54.894985 | orchestrator | "rank": 2, 2026-04-16 10:14:54.894994 | orchestrator | "name": "testbed-node-2", 2026-04-16 10:14:54.895002 | orchestrator | "public_addrs": { 2026-04-16 10:14:54.895010 | orchestrator | "addrvec": [ 2026-04-16 10:14:54.895019 | orchestrator | { 2026-04-16 10:14:54.895027 | orchestrator | "type": "v2", 2026-04-16 10:14:54.895036 | orchestrator | "addr": "192.168.16.12:3300", 2026-04-16 10:14:54.895044 | orchestrator | "nonce": 0 2026-04-16 10:14:54.895053 | orchestrator | }, 2026-04-16 10:14:54.895061 | orchestrator | { 2026-04-16 10:14:54.895069 | orchestrator | "type": "v1", 2026-04-16 10:14:54.895078 | orchestrator | "addr": "192.168.16.12:6789", 2026-04-16 10:14:54.895086 | orchestrator | "nonce": 0 2026-04-16 10:14:54.895095 | orchestrator | } 2026-04-16 10:14:54.895103 | orchestrator | ] 2026-04-16 10:14:54.895112 | orchestrator | }, 2026-04-16 10:14:54.895121 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-04-16 10:14:54.895130 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-04-16 10:14:54.895138 | orchestrator | "priority": 0, 2026-04-16 10:14:54.895148 | orchestrator | "weight": 0, 2026-04-16 10:14:54.895162 | orchestrator | "crush_location": "{}" 2026-04-16 10:14:54.895171 | orchestrator | } 2026-04-16 10:14:54.895179 | orchestrator | ] 2026-04-16 10:14:54.895188 | orchestrator | } 2026-04-16 10:14:54.895197 | orchestrator | } 2026-04-16 10:14:54.895205 | orchestrator | 2026-04-16 10:14:54.895215 | orchestrator | # Ceph free space status 2026-04-16 10:14:54.895224 | orchestrator | 2026-04-16 10:14:54.895232 | orchestrator | + echo 2026-04-16 10:14:54.895241 | orchestrator | + echo '# Ceph free space status' 2026-04-16 10:14:54.895250 | orchestrator | + echo 2026-04-16 10:14:54.895261 | orchestrator | + ceph df 2026-04-16 10:14:55.500037 | orchestrator | --- RAW STORAGE --- 
2026-04-16 10:14:55.500147 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-04-16 10:14:55.500190 | orchestrator | hdd 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-16 10:14:55.500217 | orchestrator | TOTAL 120 GiB 112 GiB 7.9 GiB 7.9 GiB 6.62 2026-04-16 10:14:55.500236 | orchestrator | 2026-04-16 10:14:55.500255 | orchestrator | --- POOLS --- 2026-04-16 10:14:55.500274 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-04-16 10:14:55.500295 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-04-16 10:14:55.500315 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-04-16 10:14:55.500334 | orchestrator | cephfs_metadata 3 16 9.0 KiB 22 113 KiB 0 35 GiB 2026-04-16 10:14:55.500351 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-04-16 10:14:55.500362 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-04-16 10:14:55.500374 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-04-16 10:14:55.500385 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-04-16 10:14:55.500396 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-04-16 10:14:55.500421 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-04-16 10:14:55.500432 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 10:14:55.500443 | orchestrator | volumes 11 32 325 MiB 267 974 MiB 0.90 35 GiB 2026-04-16 10:14:55.500474 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 6.00 35 GiB 2026-04-16 10:14:55.500485 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 10:14:55.500496 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-04-16 10:14:55.543498 | orchestrator | ++ semver 10.0.0 5.0.0 2026-04-16 10:14:55.600728 | orchestrator | + [[ 1 -eq -1 ]] 2026-04-16 10:14:55.600828 | orchestrator | + osism apply facts 2026-04-16 10:14:56.897894 | orchestrator | 2026-04-16 10:14:56 | INFO  | Prepare task for execution of facts. 
2026-04-16 10:14:56.961664 | orchestrator | 2026-04-16 10:14:56 | INFO  | Task 491deb73-ad11-4fc7-b8dd-6c5331cb4c2b (facts) was prepared for execution. 2026-04-16 10:14:56.961768 | orchestrator | 2026-04-16 10:14:56 | INFO  | It takes a moment until task 491deb73-ad11-4fc7-b8dd-6c5331cb4c2b (facts) has been started and output is visible here. 2026-04-16 10:15:18.423295 | orchestrator | 2026-04-16 10:15:18.423401 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-04-16 10:15:18.423415 | orchestrator | 2026-04-16 10:15:18.423423 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-04-16 10:15:18.423433 | orchestrator | Thursday 16 April 2026 10:15:02 +0000 (0:00:02.099) 0:00:02.099 ******** 2026-04-16 10:15:18.423440 | orchestrator | ok: [testbed-manager] 2026-04-16 10:15:18.423449 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:15:18.423455 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:15:18.423462 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:15:18.423468 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:15:18.423475 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:15:18.423482 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:15:18.423489 | orchestrator | 2026-04-16 10:15:18.423495 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-04-16 10:15:18.423502 | orchestrator | Thursday 16 April 2026 10:15:05 +0000 (0:00:03.107) 0:00:05.206 ******** 2026-04-16 10:15:18.423509 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:15:18.423518 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:15:18.423526 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:15:18.423534 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:15:18.423541 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:15:18.423548 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:15:18.423555 | 
orchestrator | skipping: [testbed-node-5] 2026-04-16 10:15:18.423561 | orchestrator | 2026-04-16 10:15:18.423568 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-04-16 10:15:18.423575 | orchestrator | 2026-04-16 10:15:18.423582 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-04-16 10:15:18.423649 | orchestrator | Thursday 16 April 2026 10:15:08 +0000 (0:00:03.193) 0:00:08.399 ******** 2026-04-16 10:15:18.423657 | orchestrator | ok: [testbed-manager] 2026-04-16 10:15:18.423665 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:15:18.423672 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:15:18.423679 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:15:18.423686 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:15:18.423693 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:15:18.423701 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:15:18.423709 | orchestrator | 2026-04-16 10:15:18.423716 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-04-16 10:15:18.423723 | orchestrator | 2026-04-16 10:15:18.423731 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-04-16 10:15:18.423738 | orchestrator | Thursday 16 April 2026 10:15:15 +0000 (0:00:07.058) 0:00:15.457 ******** 2026-04-16 10:15:18.423745 | orchestrator | skipping: [testbed-manager] 2026-04-16 10:15:18.423752 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:15:18.423759 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:15:18.423766 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:15:18.423774 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:15:18.423781 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:15:18.423812 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:15:18.423820 | orchestrator | 2026-04-16 10:15:18.423829 | orchestrator | PLAY 
RECAP ********************************************************************* 2026-04-16 10:15:18.423838 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423847 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423854 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423862 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423870 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423877 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423885 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-04-16 10:15:18.423893 | orchestrator | 2026-04-16 10:15:18.423902 | orchestrator | 2026-04-16 10:15:18.423909 | orchestrator | TASKS RECAP ******************************************************************** 2026-04-16 10:15:18.423917 | orchestrator | Thursday 16 April 2026 10:15:18 +0000 (0:00:02.438) 0:00:17.896 ******** 2026-04-16 10:15:18.423925 | orchestrator | =============================================================================== 2026-04-16 10:15:18.423932 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.06s 2026-04-16 10:15:18.423939 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 3.19s 2026-04-16 10:15:18.423947 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 3.11s 2026-04-16 10:15:18.423954 | orchestrator | Gather facts for all hosts ---------------------------------------------- 2.44s 2026-04-16 10:15:18.541398 | orchestrator | + osism 
validate ceph-mons 2026-04-16 10:16:26.968996 | orchestrator | 2026-04-16 10:16:26.969142 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-04-16 10:16:26.969162 | orchestrator | 2026-04-16 10:16:26.969175 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2026-04-16 10:16:26.969188 | orchestrator | Thursday 16 April 2026 10:15:34 +0000 (0:00:01.526) 0:00:01.526 ******** 2026-04-16 10:16:26.969199 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-16 10:16:26.969211 | orchestrator | 2026-04-16 10:16:26.969222 | orchestrator | TASK [Create report output directory] ****************************************** 2026-04-16 10:16:26.969233 | orchestrator | Thursday 16 April 2026 10:15:36 +0000 (0:00:02.348) 0:00:03.875 ******** 2026-04-16 10:16:26.969244 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-04-16 10:16:26.969255 | orchestrator | 2026-04-16 10:16:26.969267 | orchestrator | TASK [Define report vars] ****************************************************** 2026-04-16 10:16:26.969278 | orchestrator | Thursday 16 April 2026 10:15:38 +0000 (0:00:01.582) 0:00:05.457 ******** 2026-04-16 10:16:26.969289 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:16:26.969301 | orchestrator | 2026-04-16 10:16:26.969313 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-04-16 10:16:26.969324 | orchestrator | Thursday 16 April 2026 10:15:39 +0000 (0:00:01.101) 0:00:06.559 ******** 2026-04-16 10:16:26.969335 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:16:26.969346 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:16:26.969358 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:16:26.969377 | orchestrator | 2026-04-16 10:16:26.969395 | orchestrator | TASK [Get container info] ****************************************************** 2026-04-16 10:16:26.969457 | orchestrator 
| Thursday 16 April 2026 10:15:40 +0000 (0:00:01.717) 0:00:08.276 ******** 2026-04-16 10:16:26.969478 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:16:26.969496 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:16:26.969513 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:16:26.969530 | orchestrator | 2026-04-16 10:16:26.969549 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-04-16 10:16:26.969567 | orchestrator | Thursday 16 April 2026 10:15:43 +0000 (0:00:02.485) 0:00:10.761 ******** 2026-04-16 10:16:26.969606 | orchestrator | skipping: [testbed-node-0] 2026-04-16 10:16:26.969627 | orchestrator | skipping: [testbed-node-1] 2026-04-16 10:16:26.969646 | orchestrator | skipping: [testbed-node-2] 2026-04-16 10:16:26.969665 | orchestrator | 2026-04-16 10:16:26.969684 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-04-16 10:16:26.969703 | orchestrator | Thursday 16 April 2026 10:15:44 +0000 (0:00:01.434) 0:00:12.196 ******** 2026-04-16 10:16:26.969723 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:16:26.969741 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:16:26.969792 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:16:26.969812 | orchestrator | 2026-04-16 10:16:26.969833 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 10:16:26.969852 | orchestrator | Thursday 16 April 2026 10:15:46 +0000 (0:00:01.384) 0:00:13.580 ******** 2026-04-16 10:16:26.969873 | orchestrator | ok: [testbed-node-0] 2026-04-16 10:16:26.969892 | orchestrator | ok: [testbed-node-1] 2026-04-16 10:16:26.969912 | orchestrator | ok: [testbed-node-2] 2026-04-16 10:16:26.969931 | orchestrator | 2026-04-16 10:16:26.969951 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-04-16 10:16:26.969971 | orchestrator | Thursday 16 April 2026 10:15:47 +0000 (0:00:01.300) 
0:00:14.881 ********
2026-04-16 10:16:26.969990 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970009 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:16:26.970109 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:16:26.970130 | orchestrator |
2026-04-16 10:16:26.970148 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2026-04-16 10:16:26.970165 | orchestrator | Thursday 16 April 2026 10:15:49 +0000 (0:00:01.610) 0:00:16.492 ********
2026-04-16 10:16:26.970196 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.970216 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:16:26.970235 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:16:26.970251 | orchestrator |
2026-04-16 10:16:26.970268 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 10:16:26.970285 | orchestrator | Thursday 16 April 2026 10:15:50 +0000 (0:00:01.346) 0:00:17.838 ********
2026-04-16 10:16:26.970304 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970321 | orchestrator |
2026-04-16 10:16:26.970340 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 10:16:26.970358 | orchestrator | Thursday 16 April 2026 10:15:51 +0000 (0:00:01.298) 0:00:19.136 ********
2026-04-16 10:16:26.970377 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970395 | orchestrator |
2026-04-16 10:16:26.970415 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 10:16:26.970432 | orchestrator | Thursday 16 April 2026 10:15:53 +0000 (0:00:01.262) 0:00:20.399 ********
2026-04-16 10:16:26.970451 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970469 | orchestrator |
2026-04-16 10:16:26.970489 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:26.970508 | orchestrator | Thursday 16 April 2026 10:15:54 +0000 (0:00:01.287) 0:00:21.688 ********
2026-04-16 10:16:26.970526 | orchestrator |
2026-04-16 10:16:26.970545 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:26.970557 | orchestrator | Thursday 16 April 2026 10:15:54 +0000 (0:00:00.463) 0:00:22.152 ********
2026-04-16 10:16:26.970567 | orchestrator |
2026-04-16 10:16:26.970579 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:26.970599 | orchestrator | Thursday 16 April 2026 10:15:55 +0000 (0:00:00.684) 0:00:22.836 ********
2026-04-16 10:16:26.970624 | orchestrator |
2026-04-16 10:16:26.970636 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 10:16:26.970646 | orchestrator | Thursday 16 April 2026 10:15:56 +0000 (0:00:00.836) 0:00:23.672 ********
2026-04-16 10:16:26.970657 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970668 | orchestrator |
2026-04-16 10:16:26.970679 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-16 10:16:26.970691 | orchestrator | Thursday 16 April 2026 10:15:57 +0000 (0:00:01.285) 0:00:24.958 ********
2026-04-16 10:16:26.970702 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970713 | orchestrator |
2026-04-16 10:16:26.970777 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-04-16 10:16:26.970790 | orchestrator | Thursday 16 April 2026 10:15:59 +0000 (0:00:01.339) 0:00:26.298 ********
2026-04-16 10:16:26.970801 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.970812 | orchestrator |
2026-04-16 10:16:26.970823 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-04-16 10:16:26.970834 | orchestrator | Thursday 16 April 2026 10:16:00 +0000 (0:00:01.087) 0:00:27.385 ********
2026-04-16 10:16:26.970845 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:16:26.970856 | orchestrator |
2026-04-16 10:16:26.970867 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-04-16 10:16:26.970878 | orchestrator | Thursday 16 April 2026 10:16:02 +0000 (0:00:02.852) 0:00:30.237 ********
2026-04-16 10:16:26.970889 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.970900 | orchestrator |
2026-04-16 10:16:26.970911 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-04-16 10:16:26.970922 | orchestrator | Thursday 16 April 2026 10:16:04 +0000 (0:00:01.321) 0:00:31.558 ********
2026-04-16 10:16:26.970933 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.970944 | orchestrator |
2026-04-16 10:16:26.970955 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-04-16 10:16:26.970966 | orchestrator | Thursday 16 April 2026 10:16:05 +0000 (0:00:01.100) 0:00:32.659 ********
2026-04-16 10:16:26.970977 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.970988 | orchestrator |
2026-04-16 10:16:26.970999 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-04-16 10:16:26.971010 | orchestrator | Thursday 16 April 2026 10:16:06 +0000 (0:00:01.323) 0:00:33.983 ********
2026-04-16 10:16:26.971021 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.971032 | orchestrator |
2026-04-16 10:16:26.971043 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-04-16 10:16:26.971054 | orchestrator | Thursday 16 April 2026 10:16:08 +0000 (0:00:01.370) 0:00:35.353 ********
2026-04-16 10:16:26.971065 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.971076 | orchestrator |
2026-04-16 10:16:26.971087 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-04-16 10:16:26.971098 | orchestrator | Thursday 16 April 2026 10:16:09 +0000 (0:00:01.124) 0:00:36.477 ********
2026-04-16 10:16:26.971108 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.971119 | orchestrator |
2026-04-16 10:16:26.971130 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-04-16 10:16:26.971141 | orchestrator | Thursday 16 April 2026 10:16:10 +0000 (0:00:01.134) 0:00:37.612 ********
2026-04-16 10:16:26.971152 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.971163 | orchestrator |
2026-04-16 10:16:26.971230 | orchestrator | TASK [Gather status data] ******************************************************
2026-04-16 10:16:26.971244 | orchestrator | Thursday 16 April 2026 10:16:11 +0000 (0:00:01.140) 0:00:38.752 ********
2026-04-16 10:16:26.971255 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:16:26.971266 | orchestrator |
2026-04-16 10:16:26.971277 | orchestrator | TASK [Set health test data] ****************************************************
2026-04-16 10:16:26.971288 | orchestrator | Thursday 16 April 2026 10:16:13 +0000 (0:00:02.302) 0:00:41.054 ********
2026-04-16 10:16:26.971307 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.971318 | orchestrator |
2026-04-16 10:16:26.971329 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-04-16 10:16:26.971340 | orchestrator | Thursday 16 April 2026 10:16:15 +0000 (0:00:01.283) 0:00:42.338 ********
2026-04-16 10:16:26.971351 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.971362 | orchestrator |
2026-04-16 10:16:26.971373 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-04-16 10:16:26.971384 | orchestrator | Thursday 16 April 2026 10:16:16 +0000 (0:00:01.164) 0:00:43.503 ********
2026-04-16 10:16:26.971395 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:16:26.971406 | orchestrator |
2026-04-16 10:16:26.971416 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-04-16 10:16:26.971427 | orchestrator | Thursday 16 April 2026 10:16:17 +0000 (0:00:01.174) 0:00:44.677 ********
2026-04-16 10:16:26.971438 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.971449 | orchestrator |
2026-04-16 10:16:26.971460 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-04-16 10:16:26.971471 | orchestrator | Thursday 16 April 2026 10:16:18 +0000 (0:00:01.149) 0:00:45.827 ********
2026-04-16 10:16:26.971482 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.971493 | orchestrator |
2026-04-16 10:16:26.971504 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-16 10:16:26.971515 | orchestrator | Thursday 16 April 2026 10:16:19 +0000 (0:00:01.154) 0:00:46.982 ********
2026-04-16 10:16:26.971525 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:16:26.971537 | orchestrator |
2026-04-16 10:16:26.971548 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-16 10:16:26.971558 | orchestrator | Thursday 16 April 2026 10:16:20 +0000 (0:00:01.267) 0:00:48.249 ********
2026-04-16 10:16:26.971569 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:16:26.971580 | orchestrator |
2026-04-16 10:16:26.971591 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 10:16:26.971602 | orchestrator | Thursday 16 April 2026 10:16:22 +0000 (0:00:01.233) 0:00:49.483 ********
2026-04-16 10:16:26.971613 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:16:26.971624 | orchestrator |
2026-04-16 10:16:26.971641 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 10:16:26.971652 | orchestrator | Thursday 16 April 2026 10:16:25 +0000 (0:00:02.856) 0:00:52.339 ********
2026-04-16 10:16:26.971663 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:16:26.971674 | orchestrator |
2026-04-16 10:16:26.971685 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 10:16:26.971696 | orchestrator | Thursday 16 April 2026 10:16:26 +0000 (0:00:01.609) 0:00:53.949 ********
2026-04-16 10:16:26.971708 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:16:26.971718 | orchestrator |
2026-04-16 10:16:26.971737 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:34.433297 | orchestrator | Thursday 16 April 2026 10:16:27 +0000 (0:00:01.267) 0:00:55.216 ********
2026-04-16 10:16:34.433441 | orchestrator |
2026-04-16 10:16:34.433470 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:34.433489 | orchestrator | Thursday 16 April 2026 10:16:28 +0000 (0:00:00.475) 0:00:55.692 ********
2026-04-16 10:16:34.433508 | orchestrator |
2026-04-16 10:16:34.433526 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:16:34.433545 | orchestrator | Thursday 16 April 2026 10:16:28 +0000 (0:00:00.433) 0:00:56.125 ********
2026-04-16 10:16:34.433564 | orchestrator |
2026-04-16 10:16:34.433580 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-16 10:16:34.433598 | orchestrator | Thursday 16 April 2026 10:16:29 +0000 (0:00:00.851) 0:00:56.977 ********
2026-04-16 10:16:34.433617 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:16:34.433669 | orchestrator |
2026-04-16 10:16:34.433690 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 10:16:34.433707 | orchestrator | Thursday 16 April 2026 10:16:32 +0000 (0:00:02.381) 0:00:59.359 ********
2026-04-16 10:16:34.433726 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-16 10:16:34.433743 | orchestrator |     "msg": [
2026-04-16 10:16:34.433793 | orchestrator |         "Validator run completed.",
2026-04-16 10:16:34.433817 | orchestrator |         "You can find the report file here:",
2026-04-16 10:16:34.433837 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-04-16T10:15:35+00:00-report.json",
2026-04-16 10:16:34.433857 | orchestrator |         "on the following host:",
2026-04-16 10:16:34.433877 | orchestrator |         "testbed-manager"
2026-04-16 10:16:34.433896 | orchestrator |     ]
2026-04-16 10:16:34.433916 | orchestrator | }
2026-04-16 10:16:34.433935 | orchestrator |
2026-04-16 10:16:34.433954 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:16:34.433967 | orchestrator | testbed-node-0 : ok=24  changed=4  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-04-16 10:16:34.433982 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 10:16:34.433995 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 10:16:34.434008 | orchestrator |
2026-04-16 10:16:34.434089 | orchestrator |
2026-04-16 10:16:34.434101 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:16:34.434114 | orchestrator | Thursday 16 April 2026 10:16:34 +0000 (0:00:02.014) 0:01:01.373 ********
2026-04-16 10:16:34.434126 | orchestrator | ===============================================================================
2026-04-16 10:16:34.434139 | orchestrator | Aggregate test results step one ----------------------------------------- 2.86s
2026-04-16 10:16:34.434152 | orchestrator | Get monmap info from one mon container ---------------------------------- 2.85s
2026-04-16 10:16:34.434164 | orchestrator | Get container info ------------------------------------------------------ 2.49s
2026-04-16 10:16:34.434175 | orchestrator | Write report file ------------------------------------------------------- 2.38s
2026-04-16 10:16:34.434186 | orchestrator | Get timestamp for report file ------------------------------------------- 2.35s
2026-04-16 10:16:34.434197 | orchestrator | Gather status data ------------------------------------------------------ 2.30s
2026-04-16 10:16:34.434208 | orchestrator | Print report file information ------------------------------------------- 2.01s
2026-04-16 10:16:34.434218 | orchestrator | Flush handlers ---------------------------------------------------------- 1.98s
2026-04-16 10:16:34.434266 | orchestrator | Flush handlers ---------------------------------------------------------- 1.76s
2026-04-16 10:16:34.434278 | orchestrator | Prepare test data for container existance test -------------------------- 1.71s
2026-04-16 10:16:34.434289 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 1.61s
2026-04-16 10:16:34.434300 | orchestrator | Aggregate test results step two ----------------------------------------- 1.61s
2026-04-16 10:16:34.434311 | orchestrator | Create report output directory ------------------------------------------ 1.58s
2026-04-16 10:16:34.434322 | orchestrator | Set test result to failed if container is missing ----------------------- 1.43s
2026-04-16 10:16:34.434332 | orchestrator | Set test result to passed if container is existing ---------------------- 1.38s
2026-04-16 10:16:34.434343 | orchestrator | Set fsid test vars ------------------------------------------------------ 1.37s
2026-04-16 10:16:34.434354 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 1.35s
2026-04-16 10:16:34.434365 | orchestrator | Fail due to missing containers ------------------------------------------ 1.34s
2026-04-16 10:16:34.434375 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 1.32s
2026-04-16 10:16:34.434400 | orchestrator | Set quorum test data ---------------------------------------------------- 1.32s
2026-04-16 10:16:34.596496 | orchestrator | + osism validate ceph-mgrs
2026-04-16 10:17:36.458734 | orchestrator |
2026-04-16 10:17:36.458869 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-04-16 10:17:36.458888 | orchestrator |
2026-04-16 10:17:36.458964 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-16 10:17:36.458990 | orchestrator | Thursday 16 April 2026 10:16:50 +0000 (0:00:01.782) 0:00:01.782 ********
2026-04-16 10:17:36.459019 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.459037 | orchestrator |
2026-04-16 10:17:36.459055 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-16 10:17:36.459073 | orchestrator | Thursday 16 April 2026 10:16:53 +0000 (0:00:02.643) 0:00:04.426 ********
2026-04-16 10:17:36.459091 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.459109 | orchestrator |
2026-04-16 10:17:36.459128 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-16 10:17:36.459147 | orchestrator | Thursday 16 April 2026 10:16:55 +0000 (0:00:01.603) 0:00:06.029 ********
2026-04-16 10:17:36.459166 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459186 | orchestrator |
2026-04-16 10:17:36.459205 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-04-16 10:17:36.459225 | orchestrator | Thursday 16 April 2026 10:16:56 +0000 (0:00:01.122) 0:00:07.152 ********
2026-04-16 10:17:36.459245 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459272 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:17:36.459293 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:17:36.459311 | orchestrator |
2026-04-16 10:17:36.459329 | orchestrator | TASK [Get container info] ******************************************************
2026-04-16 10:17:36.459418 | orchestrator | Thursday 16 April 2026 10:16:57 +0000 (0:00:01.614) 0:00:08.767 ********
2026-04-16 10:17:36.459440 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:17:36.459460 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459476 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:17:36.459488 | orchestrator |
2026-04-16 10:17:36.459501 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-04-16 10:17:36.459513 | orchestrator | Thursday 16 April 2026 10:17:00 +0000 (0:00:02.519) 0:00:11.287 ********
2026-04-16 10:17:36.459526 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.459538 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:17:36.459551 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:17:36.459563 | orchestrator |
2026-04-16 10:17:36.459575 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-04-16 10:17:36.459586 | orchestrator | Thursday 16 April 2026 10:17:01 +0000 (0:00:01.312) 0:00:12.599 ********
2026-04-16 10:17:36.459598 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459608 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:17:36.459619 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:17:36.459630 | orchestrator |
2026-04-16 10:17:36.459641 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 10:17:36.459652 | orchestrator | Thursday 16 April 2026 10:17:02 +0000 (0:00:01.317) 0:00:13.887 ********
2026-04-16 10:17:36.459663 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459673 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:17:36.459684 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:17:36.459695 | orchestrator |
2026-04-16 10:17:36.459706 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-04-16 10:17:36.459717 | orchestrator | Thursday 16 April 2026 10:17:04 +0000 (0:00:01.317) 0:00:15.205 ********
2026-04-16 10:17:36.459728 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.459739 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:17:36.459750 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:17:36.459764 | orchestrator |
2026-04-16 10:17:36.459787 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-04-16 10:17:36.459851 | orchestrator | Thursday 16 April 2026 10:17:05 +0000 (0:00:01.316) 0:00:16.522 ********
2026-04-16 10:17:36.459870 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.459888 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:17:36.459937 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:17:36.459953 | orchestrator |
2026-04-16 10:17:36.459969 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 10:17:36.459988 | orchestrator | Thursday 16 April 2026 10:17:06 +0000 (0:00:01.320) 0:00:17.842 ********
2026-04-16 10:17:36.460007 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460025 | orchestrator |
2026-04-16 10:17:36.460042 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 10:17:36.460059 | orchestrator | Thursday 16 April 2026 10:17:08 +0000 (0:00:01.225) 0:00:19.068 ********
2026-04-16 10:17:36.460078 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460099 | orchestrator |
2026-04-16 10:17:36.460117 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 10:17:36.460137 | orchestrator | Thursday 16 April 2026 10:17:09 +0000 (0:00:01.222) 0:00:20.290 ********
2026-04-16 10:17:36.460156 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460174 | orchestrator |
2026-04-16 10:17:36.460192 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.460211 | orchestrator | Thursday 16 April 2026 10:17:10 +0000 (0:00:01.277) 0:00:21.568 ********
2026-04-16 10:17:36.460229 | orchestrator |
2026-04-16 10:17:36.460248 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.460267 | orchestrator | Thursday 16 April 2026 10:17:11 +0000 (0:00:00.447) 0:00:22.015 ********
2026-04-16 10:17:36.460285 | orchestrator |
2026-04-16 10:17:36.460299 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.460310 | orchestrator | Thursday 16 April 2026 10:17:11 +0000 (0:00:00.615) 0:00:22.631 ********
2026-04-16 10:17:36.460321 | orchestrator |
2026-04-16 10:17:36.460332 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 10:17:36.460342 | orchestrator | Thursday 16 April 2026 10:17:12 +0000 (0:00:00.820) 0:00:23.451 ********
2026-04-16 10:17:36.460353 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460364 | orchestrator |
2026-04-16 10:17:36.460376 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-04-16 10:17:36.460399 | orchestrator | Thursday 16 April 2026 10:17:13 +0000 (0:00:01.249) 0:00:24.701 ********
2026-04-16 10:17:36.460426 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460443 | orchestrator |
2026-04-16 10:17:36.460489 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-04-16 10:17:36.460507 | orchestrator | Thursday 16 April 2026 10:17:15 +0000 (0:00:01.264) 0:00:25.965 ********
2026-04-16 10:17:36.460522 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.460539 | orchestrator |
2026-04-16 10:17:36.460557 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-04-16 10:17:36.460575 | orchestrator | Thursday 16 April 2026 10:17:16 +0000 (0:00:01.097) 0:00:27.063 ********
2026-04-16 10:17:36.460593 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:17:36.460611 | orchestrator |
2026-04-16 10:17:36.460629 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-04-16 10:17:36.460647 | orchestrator | Thursday 16 April 2026 10:17:19 +0000 (0:00:03.038) 0:00:30.101 ********
2026-04-16 10:17:36.460664 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.460681 | orchestrator |
2026-04-16 10:17:36.460697 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-04-16 10:17:36.460712 | orchestrator | Thursday 16 April 2026 10:17:20 +0000 (0:00:01.322) 0:00:31.423 ********
2026-04-16 10:17:36.460730 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.460747 | orchestrator |
2026-04-16 10:17:36.460765 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-04-16 10:17:36.460782 | orchestrator | Thursday 16 April 2026 10:17:21 +0000 (0:00:01.126) 0:00:32.722 ********
2026-04-16 10:17:36.460820 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.460838 | orchestrator |
2026-04-16 10:17:36.460856 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-04-16 10:17:36.460873 | orchestrator | Thursday 16 April 2026 10:17:22 +0000 (0:00:01.174) 0:00:33.849 ********
2026-04-16 10:17:36.460890 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:17:36.460972 | orchestrator |
2026-04-16 10:17:36.460992 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-16 10:17:36.461012 | orchestrator | Thursday 16 April 2026 10:17:24 +0000 (0:00:01.174) 0:00:35.023 ********
2026-04-16 10:17:36.461030 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.461048 | orchestrator |
2026-04-16 10:17:36.461067 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-16 10:17:36.461086 | orchestrator | Thursday 16 April 2026 10:17:25 +0000 (0:00:01.530) 0:00:36.554 ********
2026-04-16 10:17:36.461104 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:17:36.461123 | orchestrator |
2026-04-16 10:17:36.461136 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 10:17:36.461147 | orchestrator | Thursday 16 April 2026 10:17:27 +0000 (0:00:01.558) 0:00:38.113 ********
2026-04-16 10:17:36.461158 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.461169 | orchestrator |
2026-04-16 10:17:36.461180 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 10:17:36.461191 | orchestrator | Thursday 16 April 2026 10:17:29 +0000 (0:00:02.316) 0:00:40.429 ********
2026-04-16 10:17:36.461201 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.461212 | orchestrator |
2026-04-16 10:17:36.461223 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 10:17:36.461234 | orchestrator | Thursday 16 April 2026 10:17:30 +0000 (0:00:01.324) 0:00:41.753 ********
2026-04-16 10:17:36.461265 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.461276 | orchestrator |
2026-04-16 10:17:36.461287 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.461298 | orchestrator | Thursday 16 April 2026 10:17:32 +0000 (0:00:01.241) 0:00:42.995 ********
2026-04-16 10:17:36.461309 | orchestrator |
2026-04-16 10:17:36.461320 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.461334 | orchestrator | Thursday 16 April 2026 10:17:32 +0000 (0:00:00.447) 0:00:43.442 ********
2026-04-16 10:17:36.461352 | orchestrator |
2026-04-16 10:17:36.461371 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:17:36.461388 | orchestrator | Thursday 16 April 2026 10:17:32 +0000 (0:00:00.443) 0:00:43.886 ********
2026-04-16 10:17:36.461406 | orchestrator |
2026-04-16 10:17:36.461423 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-16 10:17:36.461441 | orchestrator | Thursday 16 April 2026 10:17:33 +0000 (0:00:00.784) 0:00:44.670 ********
2026-04-16 10:17:36.461460 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-04-16 10:17:36.461477 | orchestrator |
2026-04-16 10:17:36.461497 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 10:17:36.461516 | orchestrator | Thursday 16 April 2026 10:17:35 +0000 (0:00:02.271) 0:00:46.942 ********
2026-04-16 10:17:36.461535 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-04-16 10:17:36.461554 | orchestrator |     "msg": [
2026-04-16 10:17:36.461573 | orchestrator |         "Validator run completed.",
2026-04-16 10:17:36.461591 | orchestrator |         "You can find the report file here:",
2026-04-16 10:17:36.461611 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-04-16T10:16:51+00:00-report.json",
2026-04-16 10:17:36.461631 | orchestrator |         "on the following host:",
2026-04-16 10:17:36.461650 | orchestrator |         "testbed-manager"
2026-04-16 10:17:36.461669 | orchestrator |     ]
2026-04-16 10:17:36.461702 | orchestrator | }
2026-04-16 10:17:36.461714 | orchestrator |
2026-04-16 10:17:36.461725 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:17:36.461737 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-16 10:17:36.461749 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 10:17:36.461782 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 10:17:38.126717 | orchestrator |
2026-04-16 10:17:38.126821 | orchestrator |
2026-04-16 10:17:38.126827 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:17:38.126834 | orchestrator | Thursday 16 April 2026 10:17:37 +0000 (0:00:01.653) 0:00:48.595 ********
2026-04-16 10:17:38.126838 | orchestrator | ===============================================================================
2026-04-16 10:17:38.126842 | orchestrator | Gather list of mgr modules ---------------------------------------------- 3.04s
2026-04-16 10:17:38.126847 | orchestrator | Get timestamp for report file ------------------------------------------- 2.64s
2026-04-16 10:17:38.126851 | orchestrator | Get container info ------------------------------------------------------ 2.52s
2026-04-16 10:17:38.126855 | orchestrator | Aggregate test results step one ----------------------------------------- 2.32s
2026-04-16 10:17:38.126859 | orchestrator | Write report file ------------------------------------------------------- 2.27s
2026-04-16 10:17:38.126863 | orchestrator | Flush handlers ---------------------------------------------------------- 1.88s
2026-04-16 10:17:38.126867 | orchestrator | Flush handlers ---------------------------------------------------------- 1.68s
2026-04-16 10:17:38.126871 | orchestrator | Print report file information ------------------------------------------- 1.65s
2026-04-16 10:17:38.126874 | orchestrator | Prepare test data for container existance test -------------------------- 1.61s
2026-04-16 10:17:38.126878 | orchestrator | Create report output directory ------------------------------------------ 1.60s
2026-04-16 10:17:38.126882 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.56s
2026-04-16 10:17:38.126886 | orchestrator | Set validation result to passed if no test failed ----------------------- 1.53s
2026-04-16 10:17:38.126889 | orchestrator | Aggregate test results step two ----------------------------------------- 1.32s
2026-04-16 10:17:38.126893 | orchestrator | Parse mgr module list from json ----------------------------------------- 1.32s
2026-04-16 10:17:38.126897 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 1.32s
2026-04-16 10:17:38.126934 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 1.32s
2026-04-16 10:17:38.126938 | orchestrator | Prepare test data ------------------------------------------------------- 1.32s
2026-04-16 10:17:38.126942 | orchestrator | Set test result to failed if container is missing ----------------------- 1.31s
2026-04-16 10:17:38.126946 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 1.30s
2026-04-16 10:17:38.126950 | orchestrator | Set test result to passed if container is existing ---------------------- 1.29s
2026-04-16 10:17:38.303226 | orchestrator | + osism validate ceph-osds
2026-04-16 10:18:09.226383 | orchestrator |
2026-04-16 10:18:09.226529 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-04-16 10:18:09.226555 | orchestrator |
2026-04-16 10:18:09.226574 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-04-16 10:18:09.226591 | orchestrator | Thursday 16 April 2026 10:17:54 +0000 (0:00:01.574) 0:00:01.574 ********
2026-04-16 10:18:09.226609 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:18:09.226627 | orchestrator |
2026-04-16 10:18:09.226643 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-04-16 10:18:09.226659 | orchestrator | Thursday 16 April 2026 10:17:56 +0000 (0:00:02.225) 0:00:03.800 ********
2026-04-16 10:18:09.226704 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:18:09.226721 | orchestrator |
2026-04-16 10:18:09.226736 | orchestrator | TASK [Create report output directory] ******************************************
2026-04-16 10:18:09.226746 | orchestrator | Thursday 16 April 2026 10:17:58 +0000 (0:00:01.272) 0:00:05.072 ********
2026-04-16 10:18:09.226756 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:18:09.226766 | orchestrator |
2026-04-16 10:18:09.226776 | orchestrator | TASK [Define report vars] ******************************************************
2026-04-16 10:18:09.226785 | orchestrator | Thursday 16 April 2026 10:17:59 +0000 (0:00:01.562) 0:00:06.634 ********
2026-04-16 10:18:09.226795 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:18:09.226806 | orchestrator |
2026-04-16 10:18:09.226816 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-16 10:18:09.226826 | orchestrator | Thursday 16 April 2026 10:18:00 +0000 (0:00:01.087) 0:00:07.722 ********
2026-04-16 10:18:09.226836 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:18:09.226845 | orchestrator |
2026-04-16 10:18:09.226855 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-16 10:18:09.226865 | orchestrator | Thursday 16 April 2026 10:18:01 +0000 (0:00:01.100) 0:00:08.822 ********
2026-04-16 10:18:09.226875 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:18:09.226886 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:18:09.226897 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:18:09.226908 | orchestrator |
2026-04-16 10:18:09.226919 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-04-16 10:18:09.226931 | orchestrator | Thursday 16 April 2026 10:18:03 +0000 (0:00:01.871) 0:00:10.694 ********
2026-04-16 10:18:09.226942 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:18:09.226953 | orchestrator |
2026-04-16 10:18:09.227008 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-04-16 10:18:09.227054 | orchestrator | Thursday 16 April 2026 10:18:04 +0000 (0:00:01.113) 0:00:11.807 ********
2026-04-16 10:18:09.227067 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:18:09.227078 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:18:09.227090 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:18:09.227102 | orchestrator |
2026-04-16 10:18:09.227113 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-04-16 10:18:09.227125 | orchestrator | Thursday 16 April 2026 10:18:06 +0000 (0:00:01.360) 0:00:13.167 ********
2026-04-16 10:18:09.227136 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:18:09.227148 | orchestrator |
2026-04-16 10:18:09.227158 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 10:18:09.227184 | orchestrator | Thursday 16 April 2026 10:18:07 +0000 (0:00:01.341) 0:00:14.509 ********
2026-04-16 10:18:09.227195 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:18:09.227205 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:18:09.227215 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:18:09.227224 | orchestrator |
2026-04-16 10:18:09.227234 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-04-16 10:18:09.227244 | orchestrator | Thursday 16 April 2026 10:18:08 +0000 (0:00:01.332) 0:00:15.842 ********
2026-04-16 10:18:09.227255 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e834e6770d0d8302cb8d6cca63f837cca19df921715db19381a6809156d5326d', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})
2026-04-16 10:18:09.227268 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6d2ff29d10b98e7fd51fc2286710b37a82c5a2e32d1590fa462dc1abf59fbfc6', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-16 10:18:09.227279 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7d73216896628311f89a195caead579a13442028bd87e935a8d14fbfe04e94ae', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})
2026-04-16 10:18:09.227297 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba4624ef80cfe1062c06133aa79dafc184a4eee677c7432064dd671b3b6fe08f', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})
2026-04-16 10:18:09.227307 | orchestrator | skipping: [testbed-node-3] => (item={'id': '657e4e23e743e7214a9ec88414b8a4ad4cd3f444506304e56711377c814a2524', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})
2026-04-16 10:18:09.227336 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a02f387016c9743fa99aad615141c06b3392e57d7bbb1597905268833d2988f5', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 55 minutes
(healthy)'})  2026-04-16 10:18:09.227347 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f87fa7175226bbd94551e2de5f18e846b0d10d42351f7da26cf3bd670c6e3121', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2026-04-16 10:18:09.227358 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'be6a12293f01ee9317aea8c985a5a9cda5ce8332b9194bda000edaa0264df58f', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 10:18:09.227378 | orchestrator | skipping: [testbed-node-3] => (item={'id': '66e910a150fed41e3e2eccef68419a04f2fc68c2029f10af7bfa3f4f111a5562', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.227388 | orchestrator | skipping: [testbed-node-3] => (item={'id': '899c0a48bbf833f5c65b9a3f2e9db21108a17f51285e6b3e293627ad78d00f58', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.227398 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd36923a0b7e60d53a732d5b24cb63aa5609499ec089cd3fec9cfe2ca30c67852', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 10:18:09.227409 | orchestrator | ok: [testbed-node-3] => (item={'id': '47273f5ec5277a9bd0ef262f01905b0506a130d7ddb17dd96a3f56d4bd1106da', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:09.227420 | orchestrator | ok: [testbed-node-3] => (item={'id': 
'312c3c10a8165ec9c2b6ac99db7ae636dffe2967cba5ca1ac06e92fdce64274f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:09.227434 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0f1e6db16b8d5bb68ea097013a3a748aece0cc398f2d97c3654250187d15e123', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.227445 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7b1e09fa67ae640cdedbbcd2c3ef806f3cb845f744a2a9b684b16ac9d13e43fc', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:09.227454 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c6ddb452aad5d4e5b9131195eead65c5184a534c427f800861f676fee56d4622', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:09.227470 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd3c04ec70354ff27a19a9183e9b0d9b8f9990367a2dc1b6ab20b3a4a8b986b0b', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.227480 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af322f05e9ee60ce2913a111b2c252ad4bb887fd2163203e95d45d8859031e9f', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.227490 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f9fded8f0db4f42fa9526ad691fc107a446dc5748f8b860d7ec9bc341afd4028', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.227503 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abfa65b1c1105812cd8e1fb95da461e09f1747a9899450e426dccc193c786b68', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-16 10:18:09.227529 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6cd33a06964617fec11ad3434afc6a3720e6ac82fd346448a0df1def1cafcfb8', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-16 10:18:09.399779 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a614c704fe2cd47dfa74d63c103807bd9ed1d4b70760d6c818b9d21e601aceaf', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-16 10:18:09.399949 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e7b56103c09bc46b60e29c37e153a4388b77dee4da95948e35bc0b59faaab8e6', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-16 10:18:09.400084 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8a2f22e1b23ea1265c79920d618daeca3d01a5669c63ecb06551202e33a56d6', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})  2026-04-16 10:18:09.400099 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aaf7b6d3d731448c5f203566956331467865084f97ab7e12e21f21b7880b5cea', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 
'Up 55 minutes (healthy)'})  2026-04-16 10:18:09.400112 | orchestrator | skipping: [testbed-node-4] => (item={'id': '48798ea7837e582a325df0288a4c85fbe4c83ae345f030756bdf0cabe9516f16', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2026-04-16 10:18:09.400124 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c8ab3a7b15ede1a4fd2c5283a075347b2ebd17b1bb5553c65a32d9e530e413de', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 10:18:09.400156 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b22b3ef48b9b341b5f75972e87b8b11076efe795b474bcfc02ab3dc4fa3d4bf4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.400168 | orchestrator | skipping: [testbed-node-4] => (item={'id': '72c426cbc641bc8b2ed25d0820f8744453a269ae49e9c860301b088b2875d34a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.400200 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd20853db36c8c79e19f5c2dd10942fa51a847e3a5e97e6337728b092a8d1a7e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 10:18:09.400214 | orchestrator | ok: [testbed-node-4] => (item={'id': 'b77919badcb8f8ba280c98fb23d497a55f44933df17bb78168eb1bdad2e1a70a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:09.400226 | orchestrator | ok: [testbed-node-4] => (item={'id': 
'53da07a00053ebf9afb5ab821635b12ff4215bf97890d9d909868f9e9f9f8842', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:09.400238 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd53a1b0b6637040dccd8bd5fc6a1f31a3219a4e35a5001034427d592e0d57fe9', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.400249 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd1eb76e5299d83610fcf1f0fdee86fb9b5bb10b8e38721856f0bedcc02307454', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:09.400261 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fff79b3ee84da3512613340d55d16769f6d670c3caf59a94dbae6571b7474219', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:09.400294 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7fc599fd9363434102171d21f18e4fedcd5008b2944eefd2262d9bcfe4a07dd3', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.400309 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ade17cdb570d732c586cd4f8c164fa1919f007c8f72ac95bee66f2758ccbc96f', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.400344 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6e398fbd0fd7baa1e081604cf61747d2c90733cce9fb689576b2bc86b5858e89', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:09.400361 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8af0a2b859dcbb562920a07d3b24ba639da48260b717e1b16f95a3576070cbec', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20260328', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-16 10:18:09.400393 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ae7e0ec6ae2fa10e44ccff1ab6fc56107925265e969b2be1555639a382da9c4', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20260328', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 6 minutes'})  2026-04-16 10:18:09.400412 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'df390a0c44c28bbd6995ac293c440d808fbbe32f3e5534d9c8bb1d3f18b6f753', 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20260328', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 7 minutes'})  2026-04-16 10:18:09.400430 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f4f496fc120a7fbd8085e6d21bd89089eaa8f61c3e63001af08767686643c1d', 'image': 'registry.osism.tech/kolla/release/2025.1/ceilometer-compute:24.0.1.20260328', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 22 minutes'})  2026-04-16 10:18:09.400448 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea1de6e505c83d75dbf4eff7b884086dd0b55ac656891cad4e1c5408fcb77b93', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-compute:31.2.1.20260328', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 46 minutes (healthy)'})  2026-04-16 10:18:09.400480 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c04a042de1cced63de778a0aeef179bedb574e92edc443a5a2b58c2f104c4817', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-libvirt:10.0.0.20260328', 'name': '/nova_libvirt', 'state': 'running', 'status': 
'Up 55 minutes (healthy)'})  2026-04-16 10:18:09.400510 | orchestrator | skipping: [testbed-node-5] => (item={'id': '38c4e2434fa4df40d1f74a6fd0a8643ebf9cd87e0b50662d22809524f638a3e0', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-ssh:31.2.1.20260328', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 55 minutes (healthy)'})  2026-04-16 10:18:09.400529 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea3e0d698db6a28a96c97d89daadd92f6cf1116581d7340afd4f520a91fb9224', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20260328', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-04-16 10:18:09.400549 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e6ce3c953de0b86609e6408e7d6a40237463a6eb164c3855addd5b965506c762', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.400569 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6b11de7427ed5b4c74eca25dcd675401c4075645789a5c1250f1b4febf546c8c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-04-16 10:18:09.400589 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9d3879a96379f7d6e3f94022ad70d00f09a80066a594b63c3ce14166cd697a25', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 2 hours'})  2026-04-16 10:18:09.400608 | orchestrator | ok: [testbed-node-5] => (item={'id': 'ab68f4aac413f6220bf547b8836ea34ad6b21c02e379ed68c9c2539434beca2f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:09.400636 | orchestrator | ok: [testbed-node-5] => (item={'id': 
'f04252ce2b1519f0d0ad498c6b4df4625a335bebe3d8566bb62bb73b0cfc5e07', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 2 hours'}) 2026-04-16 10:18:46.295617 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a51ccaf0315414f0cdc88ec8b52297c2e23116f3808af7b73fa70e952252d095', 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20260328', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:46.295730 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93eb22323327dfa970e7d6e5de344936d93945fca26bc5eaab579a95c82cf9f5', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20260328', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:46.295745 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'deb8f2f60031413192264890c06635d890acce0009a39b828d84962e2a0a9814', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20260328', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 3 hours (healthy)'})  2026-04-16 10:18:46.295761 | orchestrator | skipping: [testbed-node-5] => (item={'id': '557e8ee12db987c43b3204af078177e37ff10c009f7907c5de3ed98e83758247', 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20260328', 'name': '/cron', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:46.295783 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0a5e0d98c0330bdd4d8f00023bc0419df30929157b2be698d455db44491dc810', 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20260328', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:46.295838 | orchestrator | skipping: [testbed-node-5] => (item={'id': '30b545a720aed036e705c40bd43c2d2bbd0085aad90d1817b7232ab5d2a0dba8', 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.9.20260328', 
'name': '/fluentd', 'state': 'running', 'status': 'Up 3 hours'})  2026-04-16 10:18:46.295856 | orchestrator | 2026-04-16 10:18:46.295874 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-04-16 10:18:46.295910 | orchestrator | Thursday 16 April 2026 10:18:10 +0000 (0:00:01.678) 0:00:17.520 ******** 2026-04-16 10:18:46.295926 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.295941 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.295956 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.295972 | orchestrator | 2026-04-16 10:18:46.295989 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-04-16 10:18:46.296006 | orchestrator | Thursday 16 April 2026 10:18:11 +0000 (0:00:01.326) 0:00:18.847 ******** 2026-04-16 10:18:46.296023 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296073 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:18:46.296089 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:18:46.296104 | orchestrator | 2026-04-16 10:18:46.296122 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-04-16 10:18:46.296133 | orchestrator | Thursday 16 April 2026 10:18:13 +0000 (0:00:01.372) 0:00:20.220 ******** 2026-04-16 10:18:46.296144 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.296155 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.296166 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.296177 | orchestrator | 2026-04-16 10:18:46.296187 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 10:18:46.296198 | orchestrator | Thursday 16 April 2026 10:18:14 +0000 (0:00:01.347) 0:00:21.567 ******** 2026-04-16 10:18:46.296209 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.296220 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.296231 | orchestrator | ok: 
[testbed-node-5] 2026-04-16 10:18:46.296241 | orchestrator | 2026-04-16 10:18:46.296252 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-04-16 10:18:46.296263 | orchestrator | Thursday 16 April 2026 10:18:16 +0000 (0:00:01.524) 0:00:23.092 ******** 2026-04-16 10:18:46.296274 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-04-16 10:18:46.296286 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-04-16 10:18:46.296297 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296308 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-04-16 10:18:46.296319 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-04-16 10:18:46.296330 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:18:46.296340 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-04-16 10:18:46.296351 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-04-16 10:18:46.296362 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:18:46.296373 | orchestrator | 2026-04-16 10:18:46.296384 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-04-16 10:18:46.296394 | orchestrator | Thursday 16 April 2026 10:18:17 +0000 (0:00:01.300) 0:00:24.393 ******** 2026-04-16 10:18:46.296405 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.296416 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.296426 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.296437 | orchestrator | 2026-04-16 10:18:46.296448 | orchestrator | TASK [Set test result to failed if an OSD is not running] 
********************** 2026-04-16 10:18:46.296459 | orchestrator | Thursday 16 April 2026 10:18:18 +0000 (0:00:01.373) 0:00:25.766 ******** 2026-04-16 10:18:46.296496 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296536 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:18:46.296554 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:18:46.296570 | orchestrator | 2026-04-16 10:18:46.296585 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-04-16 10:18:46.296602 | orchestrator | Thursday 16 April 2026 10:18:20 +0000 (0:00:01.339) 0:00:27.106 ******** 2026-04-16 10:18:46.296617 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296634 | orchestrator | skipping: [testbed-node-4] 2026-04-16 10:18:46.296652 | orchestrator | skipping: [testbed-node-5] 2026-04-16 10:18:46.296668 | orchestrator | 2026-04-16 10:18:46.296684 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-04-16 10:18:46.296701 | orchestrator | Thursday 16 April 2026 10:18:21 +0000 (0:00:01.349) 0:00:28.456 ******** 2026-04-16 10:18:46.296711 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.296721 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.296731 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.296740 | orchestrator | 2026-04-16 10:18:46.296749 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-04-16 10:18:46.296759 | orchestrator | Thursday 16 April 2026 10:18:23 +0000 (0:00:01.767) 0:00:30.224 ******** 2026-04-16 10:18:46.296769 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296778 | orchestrator | 2026-04-16 10:18:46.296788 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-04-16 10:18:46.296798 | orchestrator | Thursday 16 April 2026 10:18:24 +0000 (0:00:01.293) 0:00:31.517 ******** 2026-04-16 
10:18:46.296807 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296817 | orchestrator | 2026-04-16 10:18:46.296826 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-04-16 10:18:46.296836 | orchestrator | Thursday 16 April 2026 10:18:25 +0000 (0:00:01.248) 0:00:32.766 ******** 2026-04-16 10:18:46.296845 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.296855 | orchestrator | 2026-04-16 10:18:46.296864 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 10:18:46.296874 | orchestrator | Thursday 16 April 2026 10:18:27 +0000 (0:00:01.269) 0:00:34.035 ******** 2026-04-16 10:18:46.296883 | orchestrator | 2026-04-16 10:18:46.296893 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 10:18:46.296902 | orchestrator | Thursday 16 April 2026 10:18:27 +0000 (0:00:00.466) 0:00:34.502 ******** 2026-04-16 10:18:46.296911 | orchestrator | 2026-04-16 10:18:46.296921 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-04-16 10:18:46.296930 | orchestrator | Thursday 16 April 2026 10:18:28 +0000 (0:00:00.612) 0:00:35.115 ******** 2026-04-16 10:18:46.296940 | orchestrator | 2026-04-16 10:18:46.296959 | orchestrator | TASK [Print report file information] ******************************************* 2026-04-16 10:18:46.296976 | orchestrator | Thursday 16 April 2026 10:18:28 +0000 (0:00:00.788) 0:00:35.903 ******** 2026-04-16 10:18:46.297002 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.297019 | orchestrator | 2026-04-16 10:18:46.297058 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-04-16 10:18:46.297076 | orchestrator | Thursday 16 April 2026 10:18:30 +0000 (0:00:01.270) 0:00:37.173 ******** 2026-04-16 10:18:46.297090 | orchestrator | skipping: [testbed-node-3] 2026-04-16 
10:18:46.297108 | orchestrator | 2026-04-16 10:18:46.297125 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 10:18:46.297141 | orchestrator | Thursday 16 April 2026 10:18:31 +0000 (0:00:01.318) 0:00:38.492 ******** 2026-04-16 10:18:46.297156 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297174 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.297190 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.297206 | orchestrator | 2026-04-16 10:18:46.297221 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-04-16 10:18:46.297232 | orchestrator | Thursday 16 April 2026 10:18:32 +0000 (0:00:01.346) 0:00:39.839 ******** 2026-04-16 10:18:46.297252 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297261 | orchestrator | 2026-04-16 10:18:46.297271 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-04-16 10:18:46.297281 | orchestrator | Thursday 16 April 2026 10:18:34 +0000 (0:00:01.299) 0:00:41.138 ******** 2026-04-16 10:18:46.297291 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-04-16 10:18:46.297300 | orchestrator | 2026-04-16 10:18:46.297310 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-04-16 10:18:46.297320 | orchestrator | Thursday 16 April 2026 10:18:37 +0000 (0:00:03.386) 0:00:44.525 ******** 2026-04-16 10:18:46.297329 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297339 | orchestrator | 2026-04-16 10:18:46.297349 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-04-16 10:18:46.297358 | orchestrator | Thursday 16 April 2026 10:18:38 +0000 (0:00:01.114) 0:00:45.640 ******** 2026-04-16 10:18:46.297368 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297378 | orchestrator | 2026-04-16 10:18:46.297387 | orchestrator | 
TASK [Fail test if OSDs are not up or in] ************************************** 2026-04-16 10:18:46.297397 | orchestrator | Thursday 16 April 2026 10:18:39 +0000 (0:00:01.281) 0:00:46.922 ******** 2026-04-16 10:18:46.297406 | orchestrator | skipping: [testbed-node-3] 2026-04-16 10:18:46.297416 | orchestrator | 2026-04-16 10:18:46.297426 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-04-16 10:18:46.297435 | orchestrator | Thursday 16 April 2026 10:18:40 +0000 (0:00:01.089) 0:00:48.012 ******** 2026-04-16 10:18:46.297445 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297455 | orchestrator | 2026-04-16 10:18:46.297465 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-04-16 10:18:46.297474 | orchestrator | Thursday 16 April 2026 10:18:42 +0000 (0:00:01.104) 0:00:49.116 ******** 2026-04-16 10:18:46.297484 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:18:46.297494 | orchestrator | ok: [testbed-node-4] 2026-04-16 10:18:46.297503 | orchestrator | ok: [testbed-node-5] 2026-04-16 10:18:46.297513 | orchestrator | 2026-04-16 10:18:46.297522 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-04-16 10:18:46.297532 | orchestrator | Thursday 16 April 2026 10:18:43 +0000 (0:00:01.332) 0:00:50.449 ******** 2026-04-16 10:18:46.297542 | orchestrator | changed: [testbed-node-3] 2026-04-16 10:18:46.297551 | orchestrator | changed: [testbed-node-4] 2026-04-16 10:18:46.297572 | orchestrator | changed: [testbed-node-5] 2026-04-16 10:19:17.778778 | orchestrator | 2026-04-16 10:19:17.778910 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-04-16 10:19:17.778941 | orchestrator | Thursday 16 April 2026 10:18:47 +0000 (0:00:03.948) 0:00:54.397 ******** 2026-04-16 10:19:17.778961 | orchestrator | ok: [testbed-node-3] 2026-04-16 10:19:17.778980 | orchestrator | ok: 
[testbed-node-4]
2026-04-16 10:19:17.778998 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779017 | orchestrator |
2026-04-16 10:19:17.779035 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2026-04-16 10:19:17.779055 | orchestrator | Thursday 16 April 2026 10:18:48 +0000 (0:00:01.375) 0:00:55.772 ********
2026-04-16 10:19:17.779074 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779120 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779132 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779143 | orchestrator |
2026-04-16 10:19:17.779160 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2026-04-16 10:19:17.779187 | orchestrator | Thursday 16 April 2026 10:18:50 +0000 (0:00:01.495) 0:00:57.268 ********
2026-04-16 10:19:17.779209 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:19:17.779228 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:19:17.779266 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:19:17.779301 | orchestrator |
2026-04-16 10:19:17.779320 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2026-04-16 10:19:17.779336 | orchestrator | Thursday 16 April 2026 10:18:51 +0000 (0:00:01.331) 0:00:58.600 ********
2026-04-16 10:19:17.779373 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779385 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779398 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779410 | orchestrator |
2026-04-16 10:19:17.779421 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2026-04-16 10:19:17.779432 | orchestrator | Thursday 16 April 2026 10:18:52 +0000 (0:00:01.334) 0:00:59.934 ********
2026-04-16 10:19:17.779443 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:19:17.779454 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:19:17.779464 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:19:17.779475 | orchestrator |
2026-04-16 10:19:17.779486 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2026-04-16 10:19:17.779496 | orchestrator | Thursday 16 April 2026 10:18:54 +0000 (0:00:01.490) 0:01:01.424 ********
2026-04-16 10:19:17.779507 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:19:17.779518 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:19:17.779528 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:19:17.779539 | orchestrator |
2026-04-16 10:19:17.779550 | orchestrator | TASK [Prepare test data] *******************************************************
2026-04-16 10:19:17.779574 | orchestrator | Thursday 16 April 2026 10:18:55 +0000 (0:00:01.414) 0:01:02.839 ********
2026-04-16 10:19:17.779586 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779597 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779607 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779618 | orchestrator |
2026-04-16 10:19:17.779628 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-04-16 10:19:17.779639 | orchestrator | Thursday 16 April 2026 10:18:57 +0000 (0:00:01.544) 0:01:04.383 ********
2026-04-16 10:19:17.779650 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779660 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779671 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779681 | orchestrator |
2026-04-16 10:19:17.779692 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-04-16 10:19:17.779704 | orchestrator | Thursday 16 April 2026 10:18:58 +0000 (0:00:01.556) 0:01:05.942 ********
2026-04-16 10:19:17.779715 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779726 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779737 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779756 | orchestrator |
2026-04-16 10:19:17.779774 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-04-16 10:19:17.779792 | orchestrator | Thursday 16 April 2026 10:19:00 +0000 (0:00:01.556) 0:01:07.499 ********
2026-04-16 10:19:17.779808 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:19:17.779826 | orchestrator | skipping: [testbed-node-4]
2026-04-16 10:19:17.779846 | orchestrator | skipping: [testbed-node-5]
2026-04-16 10:19:17.779864 | orchestrator |
2026-04-16 10:19:17.779882 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-04-16 10:19:17.779899 | orchestrator | Thursday 16 April 2026 10:19:01 +0000 (0:00:01.403) 0:01:08.903 ********
2026-04-16 10:19:17.779910 | orchestrator | ok: [testbed-node-3]
2026-04-16 10:19:17.779921 | orchestrator | ok: [testbed-node-4]
2026-04-16 10:19:17.779931 | orchestrator | ok: [testbed-node-5]
2026-04-16 10:19:17.779942 | orchestrator |
2026-04-16 10:19:17.779953 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-04-16 10:19:17.779964 | orchestrator | Thursday 16 April 2026 10:19:03 +0000 (0:00:01.306) 0:01:10.210 ********
2026-04-16 10:19:17.779974 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:19:17.779985 | orchestrator |
2026-04-16 10:19:17.779996 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-04-16 10:19:17.780007 | orchestrator | Thursday 16 April 2026 10:19:04 +0000 (0:00:01.243) 0:01:11.454 ********
2026-04-16 10:19:17.780018 | orchestrator | skipping: [testbed-node-3]
2026-04-16 10:19:17.780028 | orchestrator |
2026-04-16 10:19:17.780039 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-04-16 10:19:17.780050 | orchestrator | Thursday 16 April 2026 10:19:05 +0000 (0:00:01.491) 0:01:12.946 ********
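The encrypted/unencrypted OSD tasks above pass or fail a sub test by comparing an observed OSD count against the expected total. A minimal sketch of that pass/fail logic (a hypothetical helper, not the validator's actual code):

```shell
#!/usr/bin/env sh
# Sketch of the validator's count comparison: a sub test passes only
# when the observed OSD count equals the expected one.
check_osd_count() {
    expected="$1"
    observed="$2"
    if [ "$observed" -eq "$expected" ]; then
        echo "passed"
    else
        echo "failed"
    fi
}
```

For example, `check_osd_count 3 3` prints `passed`, while `check_osd_count 3 2` prints `failed`, which mirrors the skipping/ok pattern of the paired Fail/Pass tasks in the play above.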
2026-04-16 10:19:17.780071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:19:17.780082 | orchestrator |
2026-04-16 10:19:17.780141 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-04-16 10:19:17.780152 | orchestrator | Thursday 16 April 2026 10:19:08 +0000 (0:00:02.935) 0:01:15.881 ********
2026-04-16 10:19:17.780162 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:19:17.780173 | orchestrator |
2026-04-16 10:19:17.780184 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-04-16 10:19:17.780195 | orchestrator | Thursday 16 April 2026 10:19:10 +0000 (0:00:01.265) 0:01:17.147 ********
2026-04-16 10:19:17.780206 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:19:17.780216 | orchestrator |
2026-04-16 10:19:17.780249 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:19:17.780261 | orchestrator | Thursday 16 April 2026 10:19:11 +0000 (0:00:01.267) 0:01:18.415 ********
2026-04-16 10:19:17.780272 | orchestrator |
2026-04-16 10:19:17.780283 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:19:17.780293 | orchestrator | Thursday 16 April 2026 10:19:11 +0000 (0:00:00.426) 0:01:18.841 ********
2026-04-16 10:19:17.780304 | orchestrator |
2026-04-16 10:19:17.780314 | orchestrator | TASK [Flush handlers] **********************************************************
2026-04-16 10:19:17.780325 | orchestrator | Thursday 16 April 2026 10:19:12 +0000 (0:00:00.449) 0:01:19.290 ********
2026-04-16 10:19:17.780336 | orchestrator |
2026-04-16 10:19:17.780347 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-04-16 10:19:17.780358 | orchestrator | Thursday 16 April 2026 10:19:13 +0000 (0:00:00.810) 0:01:20.101 ********
2026-04-16 10:19:17.780368 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-04-16 10:19:17.780379 | orchestrator |
2026-04-16 10:19:17.780390 | orchestrator | TASK [Print report file information] *******************************************
2026-04-16 10:19:17.780400 | orchestrator | Thursday 16 April 2026 10:19:15 +0000 (0:00:02.276) 0:01:22.378 ********
2026-04-16 10:19:17.780411 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-04-16 10:19:17.780422 | orchestrator |     "msg": [
2026-04-16 10:19:17.780433 | orchestrator |         "Validator run completed.",
2026-04-16 10:19:17.780444 | orchestrator |         "You can find the report file here:",
2026-04-16 10:19:17.780455 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2026-04-16T10:17:55+00:00-report.json",
2026-04-16 10:19:17.780467 | orchestrator |         "on the following host:",
2026-04-16 10:19:17.780478 | orchestrator |         "testbed-manager"
2026-04-16 10:19:17.780489 | orchestrator |     ]
2026-04-16 10:19:17.780500 | orchestrator | }
2026-04-16 10:19:17.780511 | orchestrator |
2026-04-16 10:19:17.780522 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:19:17.780534 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-04-16 10:19:17.780546 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-16 10:19:17.780563 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-04-16 10:19:17.780575 | orchestrator |
2026-04-16 10:19:17.780585 | orchestrator |
2026-04-16 10:19:17.780596 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:19:17.780607 | orchestrator | Thursday 16 April 2026 10:19:17 +0000 (0:00:02.001) 0:01:24.379 ********
2026-04-16 10:19:17.780618 | orchestrator | ===============================================================================
2026-04-16 10:19:17.780628 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 3.95s
2026-04-16 10:19:17.780645 | orchestrator | Get ceph osd tree ------------------------------------------------------- 3.39s
2026-04-16 10:19:17.780656 | orchestrator | Aggregate test results step one ----------------------------------------- 2.94s
2026-04-16 10:19:17.780667 | orchestrator | Write report file ------------------------------------------------------- 2.28s
2026-04-16 10:19:17.780678 | orchestrator | Get timestamp for report file ------------------------------------------- 2.23s
2026-04-16 10:19:17.780688 | orchestrator | Print report file information ------------------------------------------- 2.00s
2026-04-16 10:19:17.780699 | orchestrator | Calculate OSD devices for each host ------------------------------------- 1.87s
2026-04-16 10:19:17.780710 | orchestrator | Flush handlers ---------------------------------------------------------- 1.87s
2026-04-16 10:19:17.780721 | orchestrator | Set test result to passed if all containers are running ----------------- 1.77s
2026-04-16 10:19:17.780731 | orchestrator | Flush handlers ---------------------------------------------------------- 1.69s
2026-04-16 10:19:17.780742 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 1.68s
2026-04-16 10:19:17.780753 | orchestrator | Create report output directory ------------------------------------------ 1.56s
2026-04-16 10:19:17.780763 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 1.56s
2026-04-16 10:19:17.780774 | orchestrator | Calculate sub test expression results ----------------------------------- 1.56s
2026-04-16 10:19:17.780785 | orchestrator | Prepare test data ------------------------------------------------------- 1.54s
2026-04-16 10:19:17.780796 | orchestrator | Prepare test data ------------------------------------------------------- 1.53s
2026-04-16 10:19:17.780807 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 1.50s
2026-04-16 10:19:17.780817 | orchestrator | Set validation result to failed if a test failed ------------------------ 1.49s
2026-04-16 10:19:17.780828 | orchestrator | Fail if count of unencrypted OSDs does not match ------------------------ 1.49s
2026-04-16 10:19:17.780839 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 1.41s
2026-04-16 10:19:17.954204 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-04-16 10:19:17.960701 | orchestrator | + set -e
2026-04-16 10:19:17.960783 | orchestrator | + source /opt/manager-vars.sh
2026-04-16 10:19:17.960797 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-04-16 10:19:17.960808 | orchestrator | ++ NUMBER_OF_NODES=6
2026-04-16 10:19:17.960818 | orchestrator | ++ export CEPH_VERSION=reef
2026-04-16 10:19:17.960828 | orchestrator | ++ CEPH_VERSION=reef
2026-04-16 10:19:17.960838 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-04-16 10:19:17.960848 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-04-16 10:19:17.960858 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-04-16 10:19:17.960868 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-04-16 10:19:17.960878 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-04-16 10:19:17.960888 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-04-16 10:19:17.960898 | orchestrator | ++ export ARA=false
2026-04-16 10:19:17.960908 | orchestrator | ++ ARA=false
2026-04-16 10:19:17.960918 | orchestrator | ++ export DEPLOY_MODE=manager
2026-04-16 10:19:17.960927 | orchestrator | ++ DEPLOY_MODE=manager
2026-04-16 10:19:17.960937 | orchestrator | ++ export TEMPEST=false
2026-04-16 10:19:17.960947 | orchestrator | ++ TEMPEST=false
2026-04-16 10:19:17.960957 | orchestrator | ++ export IS_ZUUL=true
2026-04-16 10:19:17.960966 | orchestrator | ++ IS_ZUUL=true
2026-04-16 10:19:17.960976 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 10:19:17.960986 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2026-04-16 10:19:17.960996 | orchestrator | ++ export EXTERNAL_API=false
2026-04-16 10:19:17.961005 | orchestrator | ++ EXTERNAL_API=false
2026-04-16 10:19:17.961015 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-04-16 10:19:17.961025 | orchestrator | ++ IMAGE_USER=ubuntu
2026-04-16 10:19:17.961034 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-04-16 10:19:17.961044 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-04-16 10:19:17.961054 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-04-16 10:19:17.961064 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-04-16 10:19:17.961073 | orchestrator | ++ export RABBITMQ3TO4=true
2026-04-16 10:19:17.961083 | orchestrator | ++ RABBITMQ3TO4=true
2026-04-16 10:19:17.961118 | orchestrator | + source /etc/os-release
2026-04-16 10:19:17.961128 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS'
2026-04-16 10:19:17.961138 | orchestrator | ++ NAME=Ubuntu
2026-04-16 10:19:17.961171 | orchestrator | ++ VERSION_ID=24.04
2026-04-16 10:19:17.961182 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)'
2026-04-16 10:19:17.961192 | orchestrator | ++ VERSION_CODENAME=noble
2026-04-16 10:19:17.961201 | orchestrator | ++ ID=ubuntu
2026-04-16 10:19:17.961211 | orchestrator | ++ ID_LIKE=debian
2026-04-16 10:19:17.961221 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-04-16 10:19:17.961231 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-04-16 10:19:17.961240 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-04-16 10:19:17.961251 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-04-16 10:19:17.961262 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-04-16 10:19:17.961274 | orchestrator | ++ LOGO=ubuntu-logo
2026-04-16 10:19:17.961284 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-04-16 10:19:17.961296 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-04-16 10:19:17.961308 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-04-16 10:19:17.981792 | orchestrator |
2026-04-16 10:19:17.981869 | orchestrator | # Status of Elasticsearch
2026-04-16 10:19:17.981880 | orchestrator |
2026-04-16 10:19:17.981888 | orchestrator | + pushd /opt/configuration/contrib
2026-04-16 10:19:17.981896 | orchestrator | + echo
2026-04-16 10:19:17.981903 | orchestrator | + echo '# Status of Elasticsearch'
2026-04-16 10:19:17.981910 | orchestrator | + echo
2026-04-16 10:19:17.981917 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-04-16 10:19:18.155021 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-04-16 10:19:18.155228 | orchestrator |
2026-04-16 10:19:18.155258 | orchestrator | # Status of MariaDB
2026-04-16 10:19:18.155270 | orchestrator |
2026-04-16 10:19:18.155281 | orchestrator | + echo
2026-04-16 10:19:18.155291 | orchestrator | + echo '# Status of MariaDB'
2026-04-16 10:19:18.155301 | orchestrator | + echo
2026-04-16 10:19:18.155742 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-16 10:19:18.198703 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-16 10:19:18.198821 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-16 10:19:18.198847 | orchestrator | + MARIADB_USER=root_shard_0
2026-04-16 10:19:18.198868 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-04-16 10:19:18.360641 | orchestrator | Reading package lists...
2026-04-16 10:19:18.726412 | orchestrator | Building dependency tree...
2026-04-16 10:19:18.728435 | orchestrator | Reading state information...
2026-04-16 10:19:19.149591 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-04-16 10:19:19.149719 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.
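The `semver 9.5.0 10.0.0-0` call above returns -1 (first version is lower), which is what selects the `root_shard_0` MariaDB user. A rough stand-in for that comparison using GNU `sort -V` (a sketch only; `sort -V` does not rank pre-release suffixes exactly like real semver, and this is not the helper the script actually ships):

```shell
#!/usr/bin/env sh
# Print -1, 0, or 1 depending on how version $1 compares to $2.
# Relies on version sort (sort -V) to order the two strings.
compare_versions() {
    if [ "$1" = "$2" ]; then
        printf '%s\n' 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        printf '%s\n' -1
    else
        printf '%s\n' 1
    fi
}
```

With the values from the log, `compare_versions 9.5.0 10.0.0-0` prints `-1`, matching the `[[ -1 -ge 0 ]]` test that follows.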
2026-04-16 10:19:19.829370 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-04-16 10:19:19.830072 | orchestrator |
2026-04-16 10:19:19.830123 | orchestrator | # Status of Prometheus
2026-04-16 10:19:19.830131 | orchestrator |
2026-04-16 10:19:19.830135 | orchestrator | + echo
2026-04-16 10:19:19.830139 | orchestrator | + echo '# Status of Prometheus'
2026-04-16 10:19:19.830144 | orchestrator | + echo
2026-04-16 10:19:19.830149 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-04-16 10:19:19.880165 | orchestrator | Unauthorized
2026-04-16 10:19:19.882389 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-04-16 10:19:19.956285 | orchestrator | Unauthorized
2026-04-16 10:19:19.960311 | orchestrator |
2026-04-16 10:19:19.960428 | orchestrator | # Status of RabbitMQ
2026-04-16 10:19:19.960445 | orchestrator |
2026-04-16 10:19:19.960458 | orchestrator | + echo
2026-04-16 10:19:19.960470 | orchestrator | + echo '# Status of RabbitMQ'
2026-04-16 10:19:19.960481 | orchestrator | + echo
2026-04-16 10:19:19.960493 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-04-16 10:19:20.008335 | orchestrator | + [[ -1 -ge 0 ]]
2026-04-16 10:19:20.008442 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-04-16 10:19:20.008454 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-04-16 10:19:20.483492 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-04-16 10:19:20.494128 | orchestrator |
2026-04-16 10:19:20.494284 | orchestrator | # Status of Redis
2026-04-16 10:19:20.494311 | orchestrator |
2026-04-16 10:19:20.494373 | orchestrator | + echo
2026-04-16 10:19:20.494395 | orchestrator | + echo '# Status of Redis'
2026-04-16 10:19:20.494416 | orchestrator | + echo
2026-04-16 10:19:20.494438 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-04-16 10:19:20.505426 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001891s;;;0.000000;10.000000
2026-04-16 10:19:20.505561 | orchestrator |
2026-04-16 10:19:20.505583 | orchestrator | # Create backup of MariaDB database
2026-04-16 10:19:20.505604 | orchestrator |
2026-04-16 10:19:20.505623 | orchestrator | + popd
2026-04-16 10:19:20.505642 | orchestrator | + echo
2026-04-16 10:19:20.505693 | orchestrator | + echo '# Create backup of MariaDB database'
2026-04-16 10:19:20.505713 | orchestrator | + echo
2026-04-16 10:19:20.505734 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-04-16 10:19:21.795355 | orchestrator | 2026-04-16 10:19:21 | INFO  | Prepare task for execution of mariadb_backup.
2026-04-16 10:19:21.859497 | orchestrator | 2026-04-16 10:19:21 | INFO  | Task d4aeb0f9-df6f-4cde-aedb-067901c3e1b0 (mariadb_backup) was prepared for execution.
2026-04-16 10:19:21.859841 | orchestrator | 2026-04-16 10:19:21 | INFO  | It takes a moment until task d4aeb0f9-df6f-4cde-aedb-067901c3e1b0 (mariadb_backup) has been started and output is visible here.
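The `check_galera_cluster` run above reported `OK: number of NODES = 3 (wsrep_cluster_size)`; at its core such a check reads the `wsrep_cluster_size` status variable and compares it against a minimum. A reduced sketch of the parsing half, assuming input in the usual `SHOW STATUS LIKE 'wsrep_cluster_size'` shape (variable name, then value; not the plugin's actual code):

```shell
#!/usr/bin/env sh
# Extract the node count from a SHOW STATUS result line such as
#   wsrep_cluster_size	3
# read on stdin, e.g. piped from:
#   mysql -N -B -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
parse_cluster_size() {
    awk '$1 == "wsrep_cluster_size" {print $2}'
}
```

A caller would then compare the printed count against the `-c` threshold (1 in the invocation above) and emit OK or CRITICAL accordingly.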
2026-04-16 10:19:56.899187 | orchestrator |
2026-04-16 10:19:56.899326 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-04-16 10:19:56.899342 | orchestrator |
2026-04-16 10:19:56.899351 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-04-16 10:19:56.899361 | orchestrator | Thursday 16 April 2026 10:19:26 +0000 (0:00:01.623) 0:00:01.623 ********
2026-04-16 10:19:56.899371 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:19:56.899381 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:19:56.899390 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:19:56.899399 | orchestrator |
2026-04-16 10:19:56.899408 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-04-16 10:19:56.899417 | orchestrator | Thursday 16 April 2026 10:19:28 +0000 (0:00:01.718) 0:00:03.341 ********
2026-04-16 10:19:56.899426 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-04-16 10:19:56.899435 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-04-16 10:19:56.899444 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-04-16 10:19:56.899453 | orchestrator |
2026-04-16 10:19:56.899462 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-04-16 10:19:56.899471 | orchestrator |
2026-04-16 10:19:56.899480 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-04-16 10:19:56.899489 | orchestrator | Thursday 16 April 2026 10:19:30 +0000 (0:00:01.743) 0:00:05.084 ********
2026-04-16 10:19:56.899497 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-04-16 10:19:56.899506 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-04-16 10:19:56.899520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-04-16 10:19:56.899534 | orchestrator |
2026-04-16 10:19:56.899548 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-04-16 10:19:56.899562 | orchestrator | Thursday 16 April 2026 10:19:31 +0000 (0:00:01.458) 0:00:06.542 ********
2026-04-16 10:19:56.899576 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-04-16 10:19:56.899590 | orchestrator |
2026-04-16 10:19:56.899604 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-04-16 10:19:56.899618 | orchestrator | Thursday 16 April 2026 10:19:34 +0000 (0:00:02.429) 0:00:08.971 ********
2026-04-16 10:19:56.899632 | orchestrator | ok: [testbed-node-2]
2026-04-16 10:19:56.899645 | orchestrator | ok: [testbed-node-0]
2026-04-16 10:19:56.899660 | orchestrator | ok: [testbed-node-1]
2026-04-16 10:19:56.899674 | orchestrator |
2026-04-16 10:19:56.899689 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-04-16 10:19:56.899727 | orchestrator | Thursday 16 April 2026 10:19:38 +0000 (0:00:04.335) 0:00:13.307 ********
2026-04-16 10:19:56.899738 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:19:56.899749 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:19:56.899759 | orchestrator | changed: [testbed-node-0]
2026-04-16 10:19:56.899770 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-04-16 10:19:56.899780 | orchestrator |
2026-04-16 10:19:56.899790 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-04-16 10:19:56.899812 | orchestrator | skipping: no hosts matched
2026-04-16 10:19:56.899823 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-04-16 10:19:56.899833 | orchestrator |
2026-04-16 10:19:56.899843 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-04-16 10:19:56.899853 | orchestrator | skipping: no hosts matched
2026-04-16 10:19:56.899863 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-04-16 10:19:56.899873 | orchestrator | mariadb_bootstrap_restart
2026-04-16 10:19:56.899882 | orchestrator |
2026-04-16 10:19:56.899891 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-04-16 10:19:56.899900 | orchestrator | skipping: no hosts matched
2026-04-16 10:19:56.899908 | orchestrator |
2026-04-16 10:19:56.899917 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-04-16 10:19:56.899926 | orchestrator |
2026-04-16 10:19:56.899934 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-04-16 10:19:56.899943 | orchestrator | Thursday 16 April 2026 10:19:53 +0000 (0:00:15.043) 0:00:28.350 ********
2026-04-16 10:19:56.899952 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:19:56.899960 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:19:56.899969 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:19:56.899978 | orchestrator |
2026-04-16 10:19:56.899987 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-04-16 10:19:56.899995 | orchestrator | Thursday 16 April 2026 10:19:54 +0000 (0:00:01.377) 0:00:29.728 ********
2026-04-16 10:19:56.900004 | orchestrator | skipping: [testbed-node-0]
2026-04-16 10:19:56.900013 | orchestrator | skipping: [testbed-node-1]
2026-04-16 10:19:56.900021 | orchestrator | skipping: [testbed-node-2]
2026-04-16 10:19:56.900030 | orchestrator |
2026-04-16 10:19:56.900039 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:19:56.900048 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-04-16 10:19:56.900059 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 10:19:56.900068 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-04-16 10:19:56.900077 | orchestrator |
2026-04-16 10:19:56.900085 | orchestrator |
2026-04-16 10:19:56.900094 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:19:56.900103 | orchestrator | Thursday 16 April 2026 10:19:56 +0000 (0:00:01.786) 0:00:31.515 ********
2026-04-16 10:19:56.900111 | orchestrator | ===============================================================================
2026-04-16 10:19:56.900120 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 15.04s
2026-04-16 10:19:56.900147 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 4.34s
2026-04-16 10:19:56.900183 | orchestrator | mariadb : include_tasks ------------------------------------------------- 2.43s
2026-04-16 10:19:56.900199 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 1.79s
2026-04-16 10:19:56.900209 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s
2026-04-16 10:19:56.900217 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.72s
2026-04-16 10:19:56.900235 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 1.46s
2026-04-16 10:19:56.900244 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 1.38s
2026-04-16 10:19:57.077646 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-04-16 10:19:57.085993 | orchestrator | + set -e
2026-04-16 10:19:57.086190 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-04-16 10:19:57.086212 | orchestrator | ++ export INTERACTIVE=false
2026-04-16 10:19:57.086225 | orchestrator | ++ INTERACTIVE=false
2026-04-16 10:19:57.086236 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-04-16 10:19:57.086247 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-04-16 10:19:57.086258 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-04-16 10:19:57.086906 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-04-16 10:19:57.093499 | orchestrator |
2026-04-16 10:19:57.093588 | orchestrator | # OpenStack endpoints
2026-04-16 10:19:57.093603 | orchestrator |
2026-04-16 10:19:57.093614 | orchestrator | ++ export MANAGER_VERSION=10.0.0
2026-04-16 10:19:57.093626 | orchestrator | ++ MANAGER_VERSION=10.0.0
2026-04-16 10:19:57.093637 | orchestrator | + export OS_CLOUD=admin
2026-04-16 10:19:57.093648 | orchestrator | + OS_CLOUD=admin
2026-04-16 10:19:57.093660 | orchestrator | + echo
2026-04-16 10:19:57.093670 | orchestrator | + echo '# OpenStack endpoints'
2026-04-16 10:19:57.093688 | orchestrator | + echo
2026-04-16 10:19:57.093705 | orchestrator | + openstack endpoint list
2026-04-16 10:20:00.129822 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-16 10:20:00.129951 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-04-16 10:20:00.129979 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-16 10:20:00.129998 | orchestrator | | 0500898b72494e67a66527e83d465096 | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-04-16 10:20:00.130010 | orchestrator | | 069306ea675441f789f76d67eb2156ec | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-16 10:20:00.130146 | orchestrator | | 06b6f8d54b0f440e95330358184df545 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-04-16 10:20:00.130195 | orchestrator | | 0ad37a9ae5b146589bfb298dd2604f63 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-04-16 10:20:00.130216 | orchestrator | | 0ad7bb9c5cd14714b50de15febf4e89a | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-04-16 10:20:00.130235 | orchestrator | | 0db8f7b3aaff4c0299025e8ad000e199 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-04-16 10:20:00.130254 | orchestrator | | 184993b154bf4afaab303323783c2473 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-04-16 10:20:00.130270 | orchestrator | | 2074199def2442bd8b59082bca6dcb42 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-04-16 10:20:00.130281 | orchestrator | | 5b33b192d06d4d4d83f474af33e226e3 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-04-16 10:20:00.130292 | orchestrator | | 680229dfa0c646129ec105f6c0a5ab93 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-04-16 10:20:00.130304 | orchestrator | | 6bff20ec606d42b29783c544cba87411 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-04-16 10:20:00.130348 | orchestrator | | 8b25d74e20354c53876924282c47c3fb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-04-16 10:20:00.130362 | orchestrator | | 8db78846009e487595dbd676fd1491dc | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-04-16 10:20:00.130374 | orchestrator | | 8e6509a9730c4f6f9466e439e43a06d3 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-04-16 10:20:00.130387 | orchestrator | | 925bc22b4a354eee870c2b33096a30f4 | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-16 10:20:00.130399 | orchestrator | | 9da1f316b8e449ffb4e585f47af769c6 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-16 10:20:00.130411 | orchestrator | | 9ecc93099a314468be4fe875c86fff2b | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-04-16 10:20:00.130424 | orchestrator | | a2416aa55be84af3b9ae50b9db7754d0 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-04-16 10:20:00.130436 | orchestrator | | a2acda1f26584568afabe0d9f2e7881f | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-04-16 10:20:00.130449 | orchestrator | | a5d8f1232cb74105ac18c8819fb2d9fb | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-04-16 10:20:00.130482 | orchestrator | | a839c31795a94e2392667e82fe207479 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-04-16 10:20:00.130496 | orchestrator | | b6c7afe174fc4a6bba7907bf72a85ae6 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-04-16 10:20:00.130508 | orchestrator | | b9ad8886dca44a258d94e1fd863fd072 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-04-16 10:20:00.130522 | orchestrator | | c6a57eb8e89146a3af49f300a8b56ef0 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-04-16 10:20:00.130535 | orchestrator | | c6def6e9228f4414b68b144c416ac6a2 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-04-16 10:20:00.130548 | orchestrator | | cbab18a249704eb99fcca0c34d4e7415 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-04-16 10:20:00.130567 | orchestrator | | d5ef7b864e234a7e88c9937bc419e2ee | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-04-16 10:20:00.130580 | orchestrator | | ddfc02f6910f4d718509571043b2decb | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-04-16 10:20:00.130592 | orchestrator | | e85e073e0e23418f9ee74af3f3730d51 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-04-16 10:20:00.130604 | orchestrator | | fde1fd9406dc40bdbf82693b39380388 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-04-16 10:20:00.130617 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-04-16 10:20:00.367596 | orchestrator |
2026-04-16 10:20:00.367686 | orchestrator | # Cinder
2026-04-16 10:20:00.367699 | orchestrator |
2026-04-16 10:20:00.367709 | orchestrator | + echo
2026-04-16 10:20:00.367718 | orchestrator | + echo '# Cinder'
2026-04-16 10:20:00.367727 | orchestrator | + echo
2026-04-16 10:20:00.367737 | orchestrator | + openstack volume service list
2026-04-16 10:20:02.893145 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:02.893329 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-04-16 10:20:02.893350 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:02.893367 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-16T10:20:01.000000 |
2026-04-16 10:20:02.893383 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-16T10:20:00.000000 |
2026-04-16 10:20:02.893400 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-16T10:20:01.000000 |
2026-04-16 10:20:02.893416 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-04-16T10:19:59.000000 |
2026-04-16 10:20:02.893431 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-04-16T10:20:00.000000 |
2026-04-16 10:20:02.893446 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-04-16T10:20:00.000000 |
2026-04-16 10:20:02.893462 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-04-16T10:19:53.000000 |
2026-04-16 10:20:02.893478 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-04-16T10:19:56.000000 |
2026-04-16 10:20:02.893494 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-04-16T10:19:56.000000 |
2026-04-16 10:20:02.893510 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:03.117470 | orchestrator |
2026-04-16 10:20:03.117568 | orchestrator | # Neutron
2026-04-16 10:20:03.117584 | orchestrator |
2026-04-16 10:20:03.117596 | orchestrator | + echo
2026-04-16 10:20:03.117608 | orchestrator | + echo '# Neutron'
2026-04-16 10:20:03.117621 | orchestrator | + echo
2026-04-16 10:20:03.117632 | orchestrator | + openstack network agent list
2026-04-16 10:20:05.875895 | orchestrator |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-16 10:20:05.875989 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2026-04-16 10:20:05.876002 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2026-04-16 10:20:05.876012 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-16 10:20:05.876021 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876030 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876038 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876047 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-16 10:20:05.876056 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876087 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876096 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2026-04-16 10:20:05.876117 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2026-04-16 10:20:05.876127 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 
2026-04-16 10:20:06.112791 | orchestrator | + openstack network service provider list
2026-04-16 10:20:08.587379 | orchestrator | +---------------+------+---------+
2026-04-16 10:20:08.587514 | orchestrator | | Service Type | Name | Default |
2026-04-16 10:20:08.587541 | orchestrator | +---------------+------+---------+
2026-04-16 10:20:08.587562 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-04-16 10:20:08.587580 | orchestrator | +---------------+------+---------+
2026-04-16 10:20:08.829369 | orchestrator |
2026-04-16 10:20:08.829493 | orchestrator | # Nova
2026-04-16 10:20:08.829514 | orchestrator |
2026-04-16 10:20:08.829527 | orchestrator | + echo
2026-04-16 10:20:08.829541 | orchestrator | + echo '# Nova'
2026-04-16 10:20:08.829557 | orchestrator | + echo
2026-04-16 10:20:08.829572 | orchestrator | + openstack compute service list
2026-04-16 10:20:11.523919 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:11.524022 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-04-16 10:20:11.524035 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:11.524045 | orchestrator | | ff8b8457-122f-4eac-a7b0-3c42b7a5c514 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-04-16T10:20:11.000000 |
2026-04-16 10:20:11.524053 | orchestrator | | 6d603e4e-11e7-4f03-b5c1-46c85270c3ab | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-04-16T10:20:11.000000 |
2026-04-16 10:20:11.524061 | orchestrator | | c0e02b1f-ee70-4e3e-aaf6-7815f95e1c5c | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-04-16T10:20:11.000000 |
2026-04-16 10:20:11.524070 | orchestrator | | 81d64790-9d7b-46cc-bd1f-bea049f89bba | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-04-16T10:20:03.000000 |
2026-04-16 10:20:11.524078 | orchestrator | | e558ce6e-5d14-4afe-bcd7-2cc8ab725d20 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-04-16T10:20:07.000000 |
2026-04-16 10:20:11.524086 | orchestrator | | 2bafa520-39e5-4883-986a-1fb1473a54ce | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-04-16T10:20:06.000000 |
2026-04-16 10:20:11.524094 | orchestrator | | 0e0a26f6-ed62-4403-b6da-b173c985f82a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-04-16T10:20:02.000000 |
2026-04-16 10:20:11.524101 | orchestrator | | 513571f2-7530-4be7-bc3e-66968a866aa6 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-04-16T10:20:05.000000 |
2026-04-16 10:20:11.524109 | orchestrator | | 3dfd95b4-aed1-4606-a552-2f306241baee | nova-compute | testbed-node-4 | nova | enabled | up | 2026-04-16T10:20:02.000000 |
2026-04-16 10:20:11.524118 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-04-16 10:20:11.801614 | orchestrator | + openstack hypervisor list
2026-04-16 10:20:14.315933 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 10:20:14.316062 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-04-16 10:20:14.316077 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 10:20:14.316088 | orchestrator | | 143cc446-be71-4704-abe3-ced7dfdfbbd7 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-04-16 10:20:14.316123 | orchestrator | | 628f7dce-e1f5-421c-9a4d-3edb027a67e0 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-04-16 10:20:14.316134 | orchestrator | | 4eb030e4-49b3-4ef5-99c5-eadbebccaf96 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-04-16 10:20:14.316144 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-04-16 10:20:14.549669 | orchestrator |
2026-04-16 10:20:14.549757 | orchestrator | # Run OpenStack test play
2026-04-16 10:20:14.549770 | orchestrator |
2026-04-16 10:20:14.549778 | orchestrator | + echo
2026-04-16 10:20:14.549786 | orchestrator | + echo '# Run OpenStack test play'
2026-04-16 10:20:14.549795 | orchestrator | + echo
2026-04-16 10:20:14.549803 | orchestrator | + osism apply --environment openstack test
2026-04-16 10:20:15.858860 | orchestrator | 2026-04-16 10:20:15 | INFO  | Trying to run play test in environment openstack
2026-04-16 10:20:25.903129 | orchestrator | 2026-04-16 10:20:25 | INFO  | Prepare task for execution of test.
2026-04-16 10:20:25.983074 | orchestrator | 2026-04-16 10:20:25 | INFO  | Task 661ebc3c-31e4-4941-ab8f-c0d1fe1b3d90 (test) was prepared for execution.
2026-04-16 10:20:25.983197 | orchestrator | 2026-04-16 10:20:25 | INFO  | It takes a moment until task 661ebc3c-31e4-4941-ab8f-c0d1fe1b3d90 (test) has been started and output is visible here.
2026-04-16 10:22:58.075178 | orchestrator |
2026-04-16 10:22:58.075286 | orchestrator | PLAY [Create test project] *****************************************************
2026-04-16 10:22:58.075298 | orchestrator |
2026-04-16 10:22:58.075304 | orchestrator | TASK [Create test domain] ******************************************************
2026-04-16 10:22:58.075311 | orchestrator | Thursday 16 April 2026 10:20:30 +0000 (0:00:01.412) 0:00:01.412 ********
2026-04-16 10:22:58.075316 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075323 | orchestrator |
2026-04-16 10:22:58.075344 | orchestrator | TASK [Create test-admin user] **************************************************
2026-04-16 10:22:58.075372 | orchestrator | Thursday 16 April 2026 10:20:36 +0000 (0:00:05.606) 0:00:07.018 ********
2026-04-16 10:22:58.075378 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075383 | orchestrator |
2026-04-16 10:22:58.075388 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-04-16 10:22:58.075394 | orchestrator | Thursday 16 April 2026 10:20:41 +0000 (0:00:05.010) 0:00:12.029 ********
2026-04-16 10:22:58.075399 | orchestrator | changed: [localhost]
2026-04-16 10:22:58.075405 | orchestrator |
2026-04-16 10:22:58.075410 | orchestrator | TASK [Create test project] *****************************************************
2026-04-16 10:22:58.075415 | orchestrator | Thursday 16 April 2026 10:20:49 +0000 (0:00:08.440) 0:00:20.470 ********
2026-04-16 10:22:58.075420 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075426 | orchestrator |
2026-04-16 10:22:58.075453 | orchestrator | TASK [Create test user] ********************************************************
2026-04-16 10:22:58.075459 | orchestrator | Thursday 16 April 2026 10:20:54 +0000 (0:00:04.905) 0:00:25.376 ********
2026-04-16 10:22:58.075464 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075469 | orchestrator |
2026-04-16 10:22:58.075474 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-04-16 10:22:58.075479 | orchestrator | Thursday 16 April 2026 10:20:59 +0000 (0:00:05.017) 0:00:30.394 ********
2026-04-16 10:22:58.075485 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-04-16 10:22:58.075490 | orchestrator | ok: [localhost] => (item=member)
2026-04-16 10:22:58.075496 | orchestrator | changed: [localhost] => (item=creator)
2026-04-16 10:22:58.075502 | orchestrator |
2026-04-16 10:22:58.075507 | orchestrator | TASK [Create test server group] ************************************************
2026-04-16 10:22:58.075512 | orchestrator | Thursday 16 April 2026 10:21:12 +0000 (0:00:12.889) 0:00:43.283 ********
2026-04-16 10:22:58.075517 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075522 | orchestrator |
2026-04-16 10:22:58.075527 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-04-16 10:22:58.075532 | orchestrator | Thursday 16 April 2026 10:21:17 +0000 (0:00:05.115) 0:00:48.398 ********
2026-04-16 10:22:58.075557 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075566 | orchestrator |
2026-04-16 10:22:58.075575 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-04-16 10:22:58.075583 | orchestrator | Thursday 16 April 2026 10:21:22 +0000 (0:00:05.041) 0:00:53.440 ********
2026-04-16 10:22:58.075592 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075601 | orchestrator |
2026-04-16 10:22:58.075609 | orchestrator | TASK [Create icmp security group] **********************************************
2026-04-16 10:22:58.075618 | orchestrator | Thursday 16 April 2026 10:21:28 +0000 (0:00:05.500) 0:00:58.941 ********
2026-04-16 10:22:58.075626 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075635 | orchestrator |
2026-04-16 10:22:58.075644 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-04-16 10:22:58.075653 | orchestrator | Thursday 16 April 2026 10:21:32 +0000 (0:00:04.636) 0:01:03.577 ********
2026-04-16 10:22:58.075661 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075670 | orchestrator |
2026-04-16 10:22:58.075679 | orchestrator | TASK [Create test keypair] *****************************************************
2026-04-16 10:22:58.075688 | orchestrator | Thursday 16 April 2026 10:21:37 +0000 (0:00:04.613) 0:01:08.191 ********
2026-04-16 10:22:58.075696 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075705 | orchestrator |
2026-04-16 10:22:58.075713 | orchestrator | TASK [Create test networks] ****************************************************
2026-04-16 10:22:58.075723 | orchestrator | Thursday 16 April 2026 10:21:42 +0000 (0:00:04.850) 0:01:13.042 ********
2026-04-16 10:22:58.075728 | orchestrator | ok: [localhost] => (item={'name': 'test-1'})
2026-04-16 10:22:58.075734 | orchestrator | ok: [localhost] => (item={'name': 'test-2'})
2026-04-16 10:22:58.075740 | orchestrator | ok: [localhost] => (item={'name': 'test-3'})
2026-04-16 10:22:58.075746 | orchestrator |
2026-04-16 10:22:58.075752 | orchestrator | TASK [Create test subnets] *****************************************************
2026-04-16 10:22:58.075758 | orchestrator | Thursday 16 April 2026 10:21:54 +0000 (0:00:12.382) 0:01:25.424 ********
2026-04-16 10:22:58.075764 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'subnet': 'subnet-test-1', 'cidr': '192.168.200.0/24'})
2026-04-16 10:22:58.075771 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'subnet': 'subnet-test-2', 'cidr': '192.168.201.0/24'})
2026-04-16 10:22:58.075777 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'subnet': 'subnet-test-3', 'cidr': '192.168.202.0/24'})
2026-04-16 10:22:58.075783 | orchestrator |
2026-04-16 10:22:58.075789 | orchestrator | TASK [Create test routers] *****************************************************
2026-04-16 10:22:58.075795 | orchestrator | Thursday 16 April 2026 10:22:07 +0000 (0:00:12.750) 0:01:38.174 ********
2026-04-16 10:22:58.075801 | orchestrator | ok: [localhost] => (item={'router': 'router-test-1', 'subnet': 'subnet-test-1'})
2026-04-16 10:22:58.075807 | orchestrator | ok: [localhost] => (item={'router': 'router-test-2', 'subnet': 'subnet-test-2'})
2026-04-16 10:22:58.075816 | orchestrator | ok: [localhost] => (item={'router': 'router-test-3', 'subnet': 'subnet-test-3'})
2026-04-16 10:22:58.075824 | orchestrator |
2026-04-16 10:22:58.075832 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-04-16 10:22:58.075840 | orchestrator |
2026-04-16 10:22:58.075848 | orchestrator | TASK [Get test server group] ***************************************************
2026-04-16 10:22:58.075855 | orchestrator | Thursday 16 April 2026 10:22:22 +0000 (0:00:14.799) 0:01:52.973 ********
2026-04-16 10:22:58.075864 | orchestrator | ok: [localhost]
2026-04-16 10:22:58.075872 | orchestrator |
2026-04-16 10:22:58.075896 | orchestrator | TASK [Detach test volume] ******************************************************
2026-04-16 10:22:58.075905 | orchestrator | Thursday 16 April 2026 10:22:26 +0000 (0:00:04.523) 0:01:57.496 ********
2026-04-16 10:22:58.075914 | orchestrator | skipping: [localhost]
2026-04-16 10:22:58.075923 | orchestrator |
2026-04-16 10:22:58.075931 | orchestrator | TASK [Delete test volume] ******************************************************
2026-04-16 10:22:58.075939 | orchestrator | Thursday 16 April 2026 10:22:28 +0000 (0:00:01.097) 0:01:58.594 ********
2026-04-16 10:22:58.075953 | orchestrator | skipping: [localhost]
2026-04-16 10:22:58.075959 | orchestrator |
2026-04-16 10:22:58.075964 | orchestrator | TASK [Delete test instances] ***************************************************
2026-04-16 10:22:58.075969 | orchestrator | Thursday 16 April 2026 10:22:29 +0000 (0:00:01.088) 0:01:59.682 ********
2026-04-16 10:22:58.075974 | orchestrator | skipping: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 10:22:58.075980 | orchestrator | skipping: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 10:22:58.075985 | orchestrator | skipping: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 10:22:58.075990 | orchestrator | skipping: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 10:22:58.075995 | orchestrator | skipping: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 10:22:58.076000 | orchestrator | skipping: [localhost]
2026-04-16 10:22:58.076005 | orchestrator |
2026-04-16 10:22:58.076010 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-04-16 10:22:58.076015 | orchestrator | Thursday 16 April 2026 10:22:30 +0000 (0:00:01.298) 0:02:00.981 ********
2026-04-16 10:22:58.076021 | orchestrator | skipping: [localhost]
2026-04-16 10:22:58.076026 | orchestrator |
2026-04-16 10:22:58.076038 | orchestrator | TASK [Create test instances] ***************************************************
2026-04-16 10:22:58.076043 | orchestrator | Thursday 16 April 2026 10:22:31 +0000 (0:00:01.244) 0:02:02.226 ********
2026-04-16 10:22:58.076048 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 10:22:58.076054 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 10:22:58.076059 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 10:22:58.076064 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 10:22:58.076069 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 10:22:58.076074 | orchestrator |
2026-04-16 10:22:58.076079 |
orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-04-16 10:22:58.076084 | orchestrator | Thursday 16 April 2026 10:22:37 +0000 (0:00:05.480) 0:02:07.707 ********
2026-04-16 10:22:58.076089 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-04-16 10:22:58.076096 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j366207508339.4082', 'results_file': '/ansible/.ansible_async/j366207508339.4082', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:22:58.076104 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j958927296955.4107', 'results_file': '/ansible/.ansible_async/j958927296955.4107', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:22:58.076110 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j107290965526.4132', 'results_file': '/ansible/.ansible_async/j107290965526.4132', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:22:58.076115 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j986996659750.4157', 'results_file': '/ansible/.ansible_async/j986996659750.4157', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:22:58.076120 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j949961233024.4182', 'results_file': '/ansible/.ansible_async/j949961233024.4182', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 10:22:58.076125 | orchestrator |
2026-04-16 10:22:58.076131 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-04-16 10:22:58.076141 | orchestrator | Thursday 16 April 2026 10:22:52 +0000 (0:00:15.526) 0:02:23.233 ********
2026-04-16 10:22:58.076147 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 10:22:58.076152 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 10:22:58.076157 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 10:22:58.076162 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 10:22:58.076167 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 10:22:58.076172 | orchestrator |
2026-04-16 10:22:58.076177 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-04-16 10:22:58.076186 | orchestrator | Thursday 16 April 2026 10:22:58 +0000 (0:00:05.376) 0:02:28.610 ********
2026-04-16 10:24:00.640496 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-04-16 10:24:00.640675 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j582246458717.4253', 'results_file': '/ansible/.ansible_async/j582246458717.4253', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640714 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j385510136119.4278', 'results_file': '/ansible/.ansible_async/j385510136119.4278', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640727 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j640463175492.4303', 'results_file': '/ansible/.ansible_async/j640463175492.4303', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640739 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j823378155855.4328', 'results_file': '/ansible/.ansible_async/j823378155855.4328', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640751 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j503529820648.4353', 'results_file': '/ansible/.ansible_async/j503529820648.4353', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640762 | orchestrator |
2026-04-16 10:24:00.640774 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-04-16 10:24:00.640786 | orchestrator | Thursday 16 April 2026 10:23:08 +0000 (0:00:10.299) 0:02:38.909 ********
2026-04-16 10:24:00.640797 | orchestrator | changed: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 10:24:00.640808 | orchestrator | changed: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 10:24:00.640819 | orchestrator | changed: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 10:24:00.640830 | orchestrator | changed: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 10:24:00.640841 | orchestrator | changed: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 10:24:00.640851 | orchestrator |
2026-04-16 10:24:00.640863 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-04-16 10:24:00.640874 | orchestrator | Thursday 16 April 2026 10:23:13 +0000 (0:00:05.628) 0:02:44.538 ********
2026-04-16 10:24:00.640885 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-04-16 10:24:00.640896 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j253141336502.4431', 'results_file': '/ansible/.ansible_async/j253141336502.4431', 'changed': True, 'item': {'name': 'test', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640907 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j883372511090.4456', 'results_file': '/ansible/.ansible_async/j883372511090.4456', 'changed': True, 'item': {'name': 'test-1', 'network': 'test-1'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640944 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j779850251211.4482', 'results_file': '/ansible/.ansible_async/j779850251211.4482', 'changed': True, 'item': {'name': 'test-2', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640956 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j951487992376.4508', 'changed': True, 'item': {'name': 'test-3', 'network': 'test-2'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640968 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j212907701177.4534', 'results_file': '/ansible/.ansible_async/j212907701177.4534', 'changed': True, 'item': {'name': 'test-4', 'network': 'test-3'}, 'ansible_loop_var': 'item'})
2026-04-16 10:24:00.640978 | orchestrator |
2026-04-16 10:24:00.640990 | orchestrator | TASK [Create test volume] ******************************************************
2026-04-16 10:24:00.641001 | orchestrator | Thursday 16 April 2026 10:23:24 +0000 (0:00:10.340) 0:02:54.879 ********
2026-04-16 10:24:00.641012 | orchestrator | ok: [localhost]
2026-04-16 10:24:00.641024 | orchestrator |
2026-04-16 10:24:00.641038 | orchestrator | TASK [Attach test volume] ******************************************************
2026-04-16 10:24:00.641050 | orchestrator | Thursday 16 April 2026 10:23:29 +0000 (0:00:04.979) 0:02:59.859 ********
2026-04-16 10:24:00.641082 | orchestrator | ok: [localhost]
2026-04-16 10:24:00.641102 | orchestrator |
2026-04-16 10:24:00.641120 | orchestrator | TASK [Create floating ip addresses] ********************************************
2026-04-16 10:24:00.641149 | orchestrator | Thursday 16 April 2026 10:23:35 +0000 (0:00:06.008) 0:03:05.868 ********
2026-04-16 10:24:00.641172 | orchestrator | ok: [localhost] => (item={'name': 'test', 'network': 'test-1'})
2026-04-16 10:24:00.641190 | orchestrator | ok: [localhost] => (item={'name': 'test-1', 'network': 'test-1'})
2026-04-16 10:24:00.641247 | orchestrator | ok: [localhost] => (item={'name': 'test-2', 'network': 'test-2'})
2026-04-16 10:24:00.641265 | orchestrator | ok: [localhost] => (item={'name': 'test-3', 'network': 'test-2'})
2026-04-16 10:24:00.641284 | orchestrator | ok: [localhost] => (item={'name': 'test-4', 'network': 'test-3'})
2026-04-16 10:24:00.641302 | orchestrator |
2026-04-16 10:24:00.641319 | orchestrator | TASK [Print floating ip addresses] *********************************************
2026-04-16 10:24:00.641335 | orchestrator | Thursday 16 April 2026 10:23:59 +0000 (0:00:23.901) 0:03:29.769 ********
2026-04-16 10:24:00.641350 | orchestrator | ok: [localhost] => (item=test) => {
2026-04-16 10:24:00.641368 | orchestrator |  "msg": "test: 192.168.112.178"
2026-04-16 10:24:00.641385 | orchestrator | }
2026-04-16 10:24:00.641404 | orchestrator | ok: [localhost] => (item=test-1) => {
2026-04-16 10:24:00.641422 | orchestrator |  "msg": "test-1: 192.168.112.118"
2026-04-16 10:24:00.641439 | orchestrator | }
2026-04-16 10:24:00.641456 | orchestrator | ok: [localhost] => (item=test-2) => {
2026-04-16 10:24:00.641475 | orchestrator |  "msg": "test-2: 192.168.112.158"
2026-04-16 10:24:00.641492 | orchestrator | }
2026-04-16 10:24:00.641509 | orchestrator | ok: [localhost] => (item=test-3) => {
2026-04-16 10:24:00.641553 | orchestrator |  "msg": "test-3: 192.168.112.131"
2026-04-16 10:24:00.641571 | orchestrator | }
2026-04-16 10:24:00.641589 | orchestrator | ok: [localhost] => (item=test-4) => {
2026-04-16 10:24:00.641607 | orchestrator |  "msg": "test-4: 192.168.112.133"
2026-04-16 10:24:00.641624 | orchestrator | }
2026-04-16 10:24:00.641643 | orchestrator |
2026-04-16 10:24:00.641660 | orchestrator | PLAY RECAP *********************************************************************
2026-04-16 10:24:00.641681 | orchestrator | localhost : ok=26  changed=8  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-04-16 10:24:00.641717 | orchestrator |
2026-04-16 10:24:00.641737 | orchestrator |
2026-04-16 10:24:00.641757 | orchestrator | TASKS RECAP ********************************************************************
2026-04-16 10:24:00.641776 | orchestrator | Thursday 16 April 2026 10:24:00 +0000 (0:00:01.303) 0:03:31.072 ********
2026-04-16 10:24:00.641794 | orchestrator |
===============================================================================
Create floating ip addresses ------------------------------------------- 23.90s
Wait for instance creation to complete --------------------------------- 15.53s
Create test routers ---------------------------------------------------- 14.80s
Add member roles to user test ------------------------------------------ 12.89s
Create test subnets ---------------------------------------------------- 12.75s
Create test networks --------------------------------------------------- 12.38s
Wait for tags to be added ---------------------------------------------- 10.34s
Wait for metadata to be added ------------------------------------------ 10.30s
Add manager role to user test-admin ------------------------------------- 8.44s
Attach test volume ------------------------------------------------------ 6.01s
Add tag to instances ---------------------------------------------------- 5.63s
Create test domain ------------------------------------------------------ 5.61s
Add rule to ssh security group ------------------------------------------ 5.50s
Create test instances --------------------------------------------------- 5.48s
Add metadata to instances ----------------------------------------------- 5.38s
Create test server group ------------------------------------------------ 5.12s
Create ssh security group ----------------------------------------------- 5.04s
Create test user -------------------------------------------------------- 5.02s
Create test-admin user -------------------------------------------------- 5.01s
Create test volume ------------------------------------------------------ 4.98s

2026-04-16 10:24:00 | orchestrator | + server_list
2026-04-16 10:24:00 | orchestrator | + openstack --os-cloud test server list
+--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
| ID                                   | Name   | Status | Networks                                | Image                    | Flavor   |
+--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+
| 9f6d4eae-4a86-4bfc-8f3e-fcf038d2b61a | test-3 | ACTIVE | test-2=192.168.112.131, 192.168.201.6   | N/A (booted from volume) | SCS-1L-1 |
| 65387079-4a6d-4b42-a28f-18ce145e99be | test-1 | ACTIVE | test-1=192.168.112.118, 192.168.200.117 | N/A (booted from volume) | SCS-1L-1 |
| bf38d081-30f3-4ba0-bd0b-569f582e4d57 | test-2 | ACTIVE | test-2=192.168.112.158, 192.168.201.183 | N/A (booted from volume) | SCS-1L-1 |
| f6a993c3-6e61-45fa-88ca-020d2ea97cc4 | test-4 | ACTIVE | test-3=192.168.112.133, 192.168.202.143 | N/A (booted from volume) | SCS-1L-1 |
| f51d122f-e34a-402a-a9c1-9b7037551377 | test   | ACTIVE | test-1=192.168.112.178, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 |
+--------------------------------------+--------+--------+-----------------------------------------+--------------------------+----------+

2026-04-16 10:24:04 | orchestrator | + openstack --os-cloud test server show test
| Field                               | Value |
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2026-04-16T06:58:47.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | test-1=192.168.112.178, 192.168.200.155 |
| config_drive                        | |
| created                             | 2026-04-16T06:58:21Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac |
| host_status                         | None |
| id                                  | f51d122f-e34a-402a-a9c1-9b7037551377 |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | 7cc2e55b0fc7451691d9affecd2ed105 |
| properties                          | hostname='test' |
| security_groups                     | name='icmp' |
|                                     | name='ssh' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2026-04-16T10:22:58Z |
| user_id                             | 67e72a90634c4772ac688d413b6057f1 |
| volumes_attached                    | delete_on_termination='True', id='be2fc687-3ec6-4504-a718-ce1c777b157a' |
|                                     | delete_on_termination='False', id='7182495a-af96-4f93-b9db-43724d69937e' |

2026-04-16 10:24:07 | orchestrator | + openstack --os-cloud test server show test-1
| Field                               | Value |
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2026-04-16T06:58:46.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | test-1=192.168.112.118, 192.168.200.117 |
| config_drive                        | |
| created                             | 2026-04-16T06:58:22Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac |
| host_status                         | None |
| id                                  | 65387079-4a6d-4b42-a28f-18ce145e99be |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-1 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | 7cc2e55b0fc7451691d9affecd2ed105 |
| properties                          | hostname='test-1' |
| security_groups                     | name='icmp' |
|                                     | name='ssh' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2026-04-16T10:22:58Z |
| user_id                             | 67e72a90634c4772ac688d413b6057f1 |
| volumes_attached                    | delete_on_termination='True', id='2b50d504-0938-40ca-b5f4-01df85085085' |

2026-04-16 10:24:10 | orchestrator | + openstack --os-cloud test server show test-2
| Field                               | Value |
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2026-04-16T06:58:49.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | test-2=192.168.112.158, 192.168.201.183 |
| config_drive                        | |
| created                             | 2026-04-16T06:58:22Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac |
| host_status                         | None |
| id                                  | bf38d081-30f3-4ba0-bd0b-569f582e4d57 |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-2 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | 7cc2e55b0fc7451691d9affecd2ed105 |
| properties                          | hostname='test-2' |
| security_groups                     | name='icmp' |
|                                     | name='ssh' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2026-04-16T10:22:59Z |
| user_id                             | 67e72a90634c4772ac688d413b6057f1 |
| volumes_attached                    | delete_on_termination='True', id='cac65a6c-1904-4384-9ed3-01b5feac6425' |

2026-04-16 10:24:13 | orchestrator | + openstack --os-cloud test server show test-3
| Field                               | Value |
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2026-04-16T06:58:49.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | test-2=192.168.112.131, 192.168.201.6 |
| config_drive                        | |
| created                             | 2026-04-16T06:58:23Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 4544a6d235a3a35eb0870b8bd6995d405d7ec82b14225f1eab019f4a |
| host_status                         | None |
| id                                  | 9f6d4eae-4a86-4bfc-8f3e-fcf038d2b61a |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-3 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | 7cc2e55b0fc7451691d9affecd2ed105 |
| properties                          | hostname='test-3' |
| security_groups                     | name='icmp' |
|                                     | name='ssh' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2026-04-16T10:23:00Z |
| user_id                             | 67e72a90634c4772ac688d413b6057f1 |
| volumes_attached                    | delete_on_termination='True', id='b06a5d91-b80d-45ef-9265-78ba7fb45521' |

2026-04-16 10:24:16 | orchestrator | + openstack --os-cloud test server show test-4
| Field                               | Value |
| OS-DCF:diskConfig                   | MANUAL |
| OS-EXT-AZ:availability_zone         | nova |
| OS-EXT-SRV-ATTR:host                | None |
| OS-EXT-SRV-ATTR:hostname            | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name       | None |
| OS-EXT-SRV-ATTR:kernel_id           | None |
| OS-EXT-SRV-ATTR:launch_index        | None |
| OS-EXT-SRV-ATTR:ramdisk_id          | None |
| OS-EXT-SRV-ATTR:reservation_id      | None |
| OS-EXT-SRV-ATTR:root_device_name    | None |
| OS-EXT-SRV-ATTR:user_data           | None |
| OS-EXT-STS:power_state              | Running |
| OS-EXT-STS:task_state               | None |
| OS-EXT-STS:vm_state                 | active |
| OS-SRV-USG:launched_at              | 2026-04-16T06:58:49.000000 |
| OS-SRV-USG:terminated_at            | None |
| accessIPv4                          | |
| accessIPv6                          | |
| addresses                           | test-3=192.168.112.133, 192.168.202.143 |
| config_drive                        | |
| created                             | 2026-04-16T06:58:22Z |
| description                         | None |
| flavor                              | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId                              | 7e3223e2e97baa479d4cff3cabd81cd76ae80444e404d98038219cac |
| host_status                         | None |
| id                                  | f6a993c3-6e61-45fa-88ca-020d2ea97cc4 |
| image                               | N/A (booted from volume) |
| key_name                            | test |
| locked                              | False |
| locked_reason                       | None |
| name                                | test-4 |
| pinned_availability_zone            | None |
| progress                            | 0 |
| project_id                          | 7cc2e55b0fc7451691d9affecd2ed105 |
| properties                          | hostname='test-4' |
| security_groups                     | name='icmp' |
|                                     | name='ssh' |
| server_groups                       | None |
| status                              | ACTIVE |
| tags                                | test |
| trusted_image_certificates          | None |
| updated                             | 2026-04-16T10:23:00Z |
| user_id                             | 67e72a90634c4772ac688d413b6057f1 |
| volumes_attached                    | delete_on_termination='True', id='dd128722-60bf-4529-8019-8b68aa8c4eba' |

2026-04-16 10:24:19 | orchestrator | + server_ping
++ openstack --os-cloud
test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
++ tr -d '\r'
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.158
PING 192.168.112.158 (192.168.112.158) 56(84) bytes of data.
64 bytes from 192.168.112.158: icmp_seq=1 ttl=63 time=6.58 ms
64 bytes from 192.168.112.158: icmp_seq=2 ttl=63 time=2.86 ms
64 bytes from 192.168.112.158: icmp_seq=3 ttl=63 time=1.55 ms
--- 192.168.112.158 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.553/3.666/6.584/2.131 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.118
PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data.
64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=7.04 ms
64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=2.08 ms
64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=1.60 ms
--- 192.168.112.118 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.604/3.572/7.035/2.456 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.131
PING 192.168.112.131 (192.168.112.131) 56(84) bytes of data.
64 bytes from 192.168.112.131: icmp_seq=1 ttl=63 time=7.20 ms
64 bytes from 192.168.112.131: icmp_seq=2 ttl=63 time=2.16 ms
64 bytes from 192.168.112.131: icmp_seq=3 ttl=63 time=1.32 ms
--- 192.168.112.131 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 1.316/3.558/7.196/2.595 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.178
PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=11.4 ms
64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.79 ms
64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=1.84 ms
--- 192.168.112.178 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.835/5.350/11.430/4.316 ms
+ for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
+ ping -c3 192.168.112.133
PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=9.09 ms
64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.66 ms
64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.00 ms
--- 192.168.112.133 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.995/4.582/9.091/3.199 ms
+ [[ 10.0.0 == \l\a\t\e\s\t ]]
2026-04-16 10:24:32 | orchestrator | ok: Runtime: 0:09:54.421636

2026-04-16 10:24:32 | PLAY RECAP
2026-04-16 10:24:32 | orchestrator | ok: 32 changed: 13 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
10:24:32.951135 | 2026-04-16 10:24:33.424789 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main] 2026-04-16 10:24:33.428041 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-16 10:24:34.725274 | 2026-04-16 10:24:34.725443 | PLAY [Post output play] 2026-04-16 10:24:34.751990 | 2026-04-16 10:24:34.752126 | LOOP [stage-output : Register sources] 2026-04-16 10:24:34.810720 | 2026-04-16 10:24:34.811020 | TASK [stage-output : Check sudo] 2026-04-16 10:24:35.862759 | orchestrator | sudo: a password is required 2026-04-16 10:24:35.903497 | orchestrator | ok: Runtime: 0:00:00.015948 2026-04-16 10:24:35.912183 | 2026-04-16 10:24:35.912280 | LOOP [stage-output : Set source and destination for files and folders] 2026-04-16 10:24:35.953036 | 2026-04-16 10:24:35.953218 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-04-16 10:24:36.001772 | orchestrator | ok 2026-04-16 10:24:36.012788 | 2026-04-16 10:24:36.012898 | LOOP [stage-output : Ensure target folders exist] 2026-04-16 10:24:36.528949 | orchestrator | ok: "docs" 2026-04-16 10:24:36.529175 | 2026-04-16 10:24:36.794347 | orchestrator | ok: "artifacts" 2026-04-16 10:24:37.069070 | orchestrator | ok: "logs" 2026-04-16 10:24:37.089203 | 2026-04-16 10:24:37.089376 | LOOP [stage-output : Copy files and folders to staging folder] 2026-04-16 10:24:37.121706 | 2026-04-16 10:24:37.121892 | TASK [stage-output : Make all log files readable] 2026-04-16 10:24:37.514060 | orchestrator | ok 2026-04-16 10:24:37.523364 | 2026-04-16 10:24:37.523472 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-04-16 10:24:37.564117 | orchestrator | skipping: Conditional result was False 2026-04-16 10:24:37.578024 | 2026-04-16 10:24:37.578138 | TASK [stage-output : Discover log files for compression] 2026-04-16 10:24:37.591026 | orchestrator | skipping: Conditional result was False 2026-04-16 10:24:37.597939 | 2026-04-16 
10:24:37.598024 | LOOP [stage-output : Archive everything from logs] 2026-04-16 10:24:37.619711 | 2026-04-16 10:24:37.619830 | PLAY [Post cleanup play] 2026-04-16 10:24:37.629460 | 2026-04-16 10:24:37.629549 | TASK [Set cloud fact (Zuul deployment)] 2026-04-16 10:24:37.667160 | orchestrator | ok 2026-04-16 10:24:37.673325 | 2026-04-16 10:24:37.673402 | TASK [Set cloud fact (local deployment)] 2026-04-16 10:24:37.695859 | orchestrator | skipping: Conditional result was False 2026-04-16 10:24:37.701934 | 2026-04-16 10:24:37.702007 | TASK [Clean the cloud environment] 2026-04-16 10:24:38.297637 | orchestrator | 2026-04-16 10:24:38 - clean up servers 2026-04-16 10:24:39.076915 | orchestrator | 2026-04-16 10:24:39 - testbed-manager 2026-04-16 10:24:39.172737 | orchestrator | 2026-04-16 10:24:39 - testbed-node-0 2026-04-16 10:24:39.257384 | orchestrator | 2026-04-16 10:24:39 - testbed-node-4 2026-04-16 10:24:39.356846 | orchestrator | 2026-04-16 10:24:39 - testbed-node-2 2026-04-16 10:24:39.458693 | orchestrator | 2026-04-16 10:24:39 - testbed-node-3 2026-04-16 10:24:39.558749 | orchestrator | 2026-04-16 10:24:39 - testbed-node-5 2026-04-16 10:24:39.651977 | orchestrator | 2026-04-16 10:24:39 - testbed-node-1 2026-04-16 10:24:39.749365 | orchestrator | 2026-04-16 10:24:39 - clean up keypairs 2026-04-16 10:24:39.769970 | orchestrator | 2026-04-16 10:24:39 - testbed 2026-04-16 10:24:39.797614 | orchestrator | 2026-04-16 10:24:39 - wait for servers to be gone 2026-04-16 10:24:50.688104 | orchestrator | 2026-04-16 10:24:50 - clean up ports 2026-04-16 10:24:50.916655 | orchestrator | 2026-04-16 10:24:50 - 08b8dc7b-93d2-49e0-ad31-b0ed4f7923f8 2026-04-16 10:24:51.203099 | orchestrator | 2026-04-16 10:24:51 - 2e72b957-5000-450d-9109-792b91250723 2026-04-16 10:24:51.462951 | orchestrator | 2026-04-16 10:24:51 - 74d7e519-07a4-4c85-bc50-aef8ccca1b3b 2026-04-16 10:24:51.955891 | orchestrator | 2026-04-16 10:24:51 - 7a6e62f9-2274-48d2-a4da-77c5bdd2f224 2026-04-16 10:24:52.270178 | 
orchestrator | 2026-04-16 10:24:52 - 85783b00-10ff-48c4-adf4-e860e28b9415 2026-04-16 10:24:52.486284 | orchestrator | 2026-04-16 10:24:52 - b1756d93-3b54-4e44-ab25-94293c191fc9 2026-04-16 10:24:52.789832 | orchestrator | 2026-04-16 10:24:52 - e5d5c799-887a-4f68-9d37-283a95554adb 2026-04-16 10:24:53.156747 | orchestrator | 2026-04-16 10:24:53 - clean up volumes 2026-04-16 10:24:53.286258 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-5-node-base 2026-04-16 10:24:53.332452 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-4-node-base 2026-04-16 10:24:53.383524 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-1-node-base 2026-04-16 10:24:53.440411 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-2-node-base 2026-04-16 10:24:53.517445 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-3-node-base 2026-04-16 10:24:53.569060 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-0-node-base 2026-04-16 10:24:53.628932 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-manager-base 2026-04-16 10:24:53.690854 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-5-node-5 2026-04-16 10:24:53.744731 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-8-node-5 2026-04-16 10:24:53.796471 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-1-node-4 2026-04-16 10:24:53.847066 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-3-node-3 2026-04-16 10:24:53.897378 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-7-node-4 2026-04-16 10:24:53.946310 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-0-node-3 2026-04-16 10:24:53.997212 | orchestrator | 2026-04-16 10:24:53 - testbed-volume-4-node-4 2026-04-16 10:24:54.053476 | orchestrator | 2026-04-16 10:24:54 - testbed-volume-6-node-3 2026-04-16 10:24:54.106086 | orchestrator | 2026-04-16 10:24:54 - testbed-volume-2-node-5 2026-04-16 10:24:54.155210 | orchestrator | 2026-04-16 10:24:54 - disconnect routers 2026-04-16 10:24:54.850545 | orchestrator | 2026-04-16 10:24:54 - testbed 2026-04-16 
10:24:56.065183 | orchestrator | 2026-04-16 10:24:56 - clean up subnets 2026-04-16 10:24:56.109107 | orchestrator | 2026-04-16 10:24:56 - subnet-testbed-management 2026-04-16 10:24:56.318562 | orchestrator | 2026-04-16 10:24:56 - clean up networks 2026-04-16 10:24:56.551730 | orchestrator | 2026-04-16 10:24:56 - net-testbed-management 2026-04-16 10:24:56.870346 | orchestrator | 2026-04-16 10:24:56 - clean up security groups 2026-04-16 10:24:56.925160 | orchestrator | 2026-04-16 10:24:56 - testbed-management 2026-04-16 10:24:57.077733 | orchestrator | 2026-04-16 10:24:57 - testbed-node 2026-04-16 10:24:57.204085 | orchestrator | 2026-04-16 10:24:57 - clean up floating ips 2026-04-16 10:24:57.245059 | orchestrator | 2026-04-16 10:24:57 - 81.163.193.2 2026-04-16 10:24:57.693995 | orchestrator | 2026-04-16 10:24:57 - clean up routers 2026-04-16 10:24:57.760342 | orchestrator | 2026-04-16 10:24:57 - testbed 2026-04-16 10:24:58.799716 | orchestrator | ok: Runtime: 0:00:20.756668 2026-04-16 10:24:58.802893 | 2026-04-16 10:24:58.803020 | PLAY RECAP 2026-04-16 10:24:58.803111 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-04-16 10:24:58.803154 | 2026-04-16 10:24:58.979842 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-04-16 10:24:58.980855 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-16 10:24:59.768156 | 2026-04-16 10:24:59.768361 | PLAY [Cleanup play] 2026-04-16 10:24:59.788726 | 2026-04-16 10:24:59.789515 | TASK [Set cloud fact (Zuul deployment)] 2026-04-16 10:24:59.830696 | orchestrator | ok 2026-04-16 10:24:59.837705 | 2026-04-16 10:24:59.837848 | TASK [Set cloud fact (local deployment)] 2026-04-16 10:24:59.862337 | orchestrator | skipping: Conditional result was False 2026-04-16 10:24:59.872973 | 2026-04-16 10:24:59.873138 | TASK [Clean the cloud environment] 2026-04-16 10:25:00.964732 | orchestrator | 2026-04-16 10:25:00 - 
clean up servers 2026-04-16 10:25:01.624974 | orchestrator | 2026-04-16 10:25:01 - clean up keypairs 2026-04-16 10:25:01.641073 | orchestrator | 2026-04-16 10:25:01 - wait for servers to be gone 2026-04-16 10:25:01.692678 | orchestrator | 2026-04-16 10:25:01 - clean up ports 2026-04-16 10:25:01.780044 | orchestrator | 2026-04-16 10:25:01 - clean up volumes 2026-04-16 10:25:01.860531 | orchestrator | 2026-04-16 10:25:01 - disconnect routers 2026-04-16 10:25:01.907883 | orchestrator | 2026-04-16 10:25:01 - clean up subnets 2026-04-16 10:25:01.928558 | orchestrator | 2026-04-16 10:25:01 - clean up networks 2026-04-16 10:25:02.136329 | orchestrator | 2026-04-16 10:25:02 - clean up security groups 2026-04-16 10:25:02.183708 | orchestrator | 2026-04-16 10:25:02 - clean up floating ips 2026-04-16 10:25:02.216884 | orchestrator | 2026-04-16 10:25:02 - clean up routers 2026-04-16 10:25:02.417515 | orchestrator | ok: Runtime: 0:00:01.636097 2026-04-16 10:25:02.421340 | 2026-04-16 10:25:02.421478 | PLAY RECAP 2026-04-16 10:25:02.421579 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-04-16 10:25:02.421631 | 2026-04-16 10:25:02.569013 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-04-16 10:25:02.570027 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-16 10:25:03.377860 | 2026-04-16 10:25:03.378023 | PLAY [Base post-fetch] 2026-04-16 10:25:03.394557 | 2026-04-16 10:25:03.394700 | TASK [fetch-output : Set log path for multiple nodes] 2026-04-16 10:25:03.450976 | orchestrator | skipping: Conditional result was False 2026-04-16 10:25:03.458701 | 2026-04-16 10:25:03.458917 | TASK [fetch-output : Set log path for single node] 2026-04-16 10:25:03.500610 | orchestrator | ok 2026-04-16 10:25:03.506814 | 2026-04-16 10:25:03.507525 | LOOP [fetch-output : Ensure local output dirs] 2026-04-16 10:25:04.060419 | orchestrator -> localhost 
| ok: "/var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/logs" 2026-04-16 10:25:04.411102 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/artifacts" 2026-04-16 10:25:04.830458 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/219a5aa2066345788719ba53c87e0c69/work/docs" 2026-04-16 10:25:04.844895 | 2026-04-16 10:25:04.845120 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-04-16 10:25:05.743385 | orchestrator | changed: .d..t...... ./ 2026-04-16 10:25:05.743748 | orchestrator | changed: All items complete 2026-04-16 10:25:05.743803 | 2026-04-16 10:25:06.461786 | orchestrator | changed: .d..t...... ./ 2026-04-16 10:25:07.196770 | orchestrator | changed: .d..t...... ./ 2026-04-16 10:25:07.221195 | 2026-04-16 10:25:07.221367 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-04-16 10:25:07.249888 | orchestrator | skipping: Conditional result was False 2026-04-16 10:25:07.253129 | orchestrator | skipping: Conditional result was False 2026-04-16 10:25:07.264083 | 2026-04-16 10:25:07.264171 | PLAY RECAP 2026-04-16 10:25:07.264227 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-04-16 10:25:07.264255 | 2026-04-16 10:25:07.392915 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-04-16 10:25:07.394358 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-16 10:25:08.149458 | 2026-04-16 10:25:08.149634 | PLAY [Base post] 2026-04-16 10:25:08.164433 | 2026-04-16 10:25:08.164582 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-04-16 10:25:09.161524 | orchestrator | changed 2026-04-16 10:25:09.174641 | 2026-04-16 10:25:09.174892 | PLAY RECAP 2026-04-16 10:25:09.175019 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-04-16 10:25:09.175117 | 2026-04-16 
10:25:09.312476 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-04-16 10:25:09.313524 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-04-16 10:25:10.125405 | 2026-04-16 10:25:10.125595 | PLAY [Base post-logs] 2026-04-16 10:25:10.137539 | 2026-04-16 10:25:10.137697 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-04-16 10:25:10.685193 | localhost | changed 2026-04-16 10:25:10.697011 | 2026-04-16 10:25:10.697190 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-04-16 10:25:10.734154 | localhost | ok 2026-04-16 10:25:10.737569 | 2026-04-16 10:25:10.737681 | TASK [Set zuul-log-path fact] 2026-04-16 10:25:10.753216 | localhost | ok 2026-04-16 10:25:10.763374 | 2026-04-16 10:25:10.763496 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-04-16 10:25:10.788337 | localhost | ok 2026-04-16 10:25:10.791539 | 2026-04-16 10:25:10.791646 | TASK [upload-logs : Create log directories] 2026-04-16 10:25:11.340674 | localhost | changed 2026-04-16 10:25:11.343660 | 2026-04-16 10:25:11.343775 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-04-16 10:25:12.106365 | localhost -> localhost | ok: Runtime: 0:00:00.008037 2026-04-16 10:25:12.110980 | 2026-04-16 10:25:12.111112 | TASK [upload-logs : Upload logs to log server] 2026-04-16 10:25:12.687818 | localhost | Output suppressed because no_log was given 2026-04-16 10:25:12.691278 | 2026-04-16 10:25:12.691442 | LOOP [upload-logs : Compress console log and json output] 2026-04-16 10:25:12.747172 | localhost | skipping: Conditional result was False 2026-04-16 10:25:12.752458 | localhost | skipping: Conditional result was False 2026-04-16 10:25:12.756846 | 2026-04-16 10:25:12.756952 | LOOP [upload-logs : Upload compressed console log and json output] 2026-04-16 10:25:12.816850 | localhost | skipping: Conditional result was False 2026-04-16 10:25:12.817300 | 
2026-04-16 10:25:12.822560 | localhost | skipping: Conditional result was False 2026-04-16 10:25:12.836747 | 2026-04-16 10:25:12.836983 | LOOP [upload-logs : Upload console log and json output]
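
Editor's note: the connectivity check traced near the top of this excerpt lists all ACTIVE floating IPs via the OpenStack CLI, strips carriage returns, and pings each address three times. A minimal standalone sketch of that loop follows; a stub function stands in for the real `openstack --os-cloud test floating ip list` call so the logic can run without a cloud, and the stub's sample addresses are simply the first two seen in the log, not live endpoints.

```shell
#!/usr/bin/env bash
# Sketch of the floating-IP connectivity check from the job log above.
set -euo pipefail

# Stub standing in for:
#   openstack --os-cloud test floating ip list --status ACTIVE \
#       -f value -c "Floating IP Address"
# CLI output can carry trailing carriage returns, hence the tr -d '\r'
# in the real loop; the stub emits \r on purpose to show it matters.
list_active_floating_ips() {
    printf '192.168.112.158\r\n192.168.112.118\r\n'
}

checked=()
for address in $(list_active_floating_ips | tr -d '\r'); do
    # The job runs: ping -c3 "$address"
    # Here we only record each address instead of pinging it.
    checked+=("$address")
done

printf '%s\n' "${checked[@]}"
```

Without the `tr -d '\r'`, each address would keep a trailing carriage return and `ping` would be handed a malformed host argument.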